Commit Graph

17574 Commits

dovgopoly
3e979d1b20 pytest: fix bcli tests after sync refactor
Rewrite `test_bitcoin_failure` to reflect synchronous bcli behavior: the node now crashes on invalid bitcoind responses rather than retrying. Add `may_fail` and `broken_log` to handle the expected crash.

Update `test_bitcoind_fail_first` stderr check to match the new error message format from `get_bitcoin_result`.

Update test mocks to use proper error format for "block not found".

Co-authored-by: ShahanaFarooqui <shahana.farooqui@gmail.com>
2026-02-18 14:16:29 +10:00
dovgopoly
7b1793f40d lightningd: add get_bitcoin_result for bcli response handling
Add `get_bitcoin_result` function that checks bcli plugin responses for errors and returns the result token. Previously, callbacks only detected errors when result parsing failed, ignoring the explicit error field from the plugin. Now we extract the actual error message from bcli, providing clearer reasoning when the plugin returns an error response.
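
For illustration, a minimal Python sketch of that check (hypothetical names and shapes, not the actual C helper): look at the explicit error field first, and only then expect a result token.

```
def get_bitcoin_result(response):
    """Return the result from a bcli-style response dict, surfacing the
    plugin's own error message instead of a generic parse failure
    (illustrative only, not the actual C helper)."""
    if response.get("error") is not None:
        raise RuntimeError(f"bcli returned error: {response['error']}")
    if "result" not in response:
        raise RuntimeError(f"bcli response has no result: {response}")
    return response["result"]
```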
2026-02-18 14:16:29 +10:00
dovgopoly
b5c300a82b bcli: convert getrawblockbyheight to synchronous execution
Also rename command_err_badjson to generic command_err helper, since error messages aren't always about bad JSON (e.g., "command failed" for non-zero exit).
2026-02-18 14:16:29 +10:00
dovgopoly
d06024cef7 bcli: convert estimatefees to synchronous execution
Add `command_err_badjson` helper for sync error handling, mirroring the async `command_err_bcli_badjson`. Store args string in `bcli_result` for consistent error messages.
2026-02-18 14:16:29 +10:00
dovgopoly
0de1350706 bcli: convert sendrawtransaction to synchronous execution 2026-02-18 14:16:29 +10:00
dovgopoly
a3e07f4f3a bcli: convert getutxout to synchronous execution 2026-02-18 14:16:29 +10:00
dovgopoly
f8c7a20403 bcli: convert getchaininfo to synchronous execution 2026-02-18 14:16:29 +10:00
dovgopoly
fad05200eb bcli: add synchronous run_bitcoin_cli for future refactor 2026-02-18 14:16:29 +10:00
Rusty Russell
963b353a30 connectd: use membuf for more efficient output queue.
This is exactly what membuf is for: it handles expansion much more
neatly.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
afdc92fedf connectd: only do lazy transmission for *definitely* non-urgent messages.
Since we delay the others quite a lot (up to 1 second), it's better to consider
most messages "urgent" and worth immediately transmitting.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
2436ee6f6f connectd: don't flush messages unless we have something important.
This replaces our previous Nagle-based toggling.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
8b90d40a75 connectd: pad messages with dummy pings if needed to make size uniform.
Messages are now a constant size.
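
As a rough illustration of the size arithmetic only (it assumes the BOLT #1 ping format of type 18, u16 num_pong_bytes, u16 byteslen, then ignored bytes, plus a made-up target size; framing and encryption overhead are ignored):

```
import struct

TARGET_LEN = 32768            # assumed target size, not a value from this commit
PING_OVERHEAD = 2 + 2 + 2     # type (u16) + num_pong_bytes (u16) + byteslen (u16)

def padding_ping_for(msg_len):
    """Build a dummy ping sized so msg_len plus the ping reaches TARGET_LEN."""
    shortfall = TARGET_LEN - msg_len
    if shortfall < PING_OVERHEAD:
        return None
    ignored = shortfall - PING_OVERHEAD
    # num_pong_bytes >= 65532 asks the peer not to reply with a pong.
    return struct.pack(">HHH", 18, 65532, ignored) + bytes(ignored)
```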

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: we now pad all peer messages to make them the same length.
2026-02-18 14:13:25 +10:30
Rusty Russell
ca2d389920 devtools/gossipwith: don't count "padding" pings towards max-messages count.
We are about to use them to make our packet size constant, and this
will upset the tests.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
d45bc2d56e connectd: don't toggle nagle on and off, leave it always off.
We're doing our own buffering now.

We leave the is_urgent() function for two commits in the future though.
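
For reference, leaving Nagle off is the standard TCP_NODELAY setting (a generic socket sketch, not connectd's C code):

```
import socket

def disable_nagle(sock):
    # Nagle stays permanently off: we do our own buffering, so the
    # kernel should not add its own coalescing delays.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```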

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
c23b7a492d connectd: switch to using io_write_partial instead of io_write.
This gives us finer control over write sizes: for now we just cap
the write size.
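
The idea of a capped partial write, as a generic sketch (plain sockets, not the ccan/io API; the cap value is made up):

```
MAX_WRITE = 16384   # assumed cap per write; the commit doesn't give a value

def write_some(sock, pending):
    """Send at most MAX_WRITE bytes and return whatever is still unsent."""
    sent = sock.send(pending[:MAX_WRITE])
    return pending[sent:]
```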

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
df1ae1d680 connectd: refactor to break up "encrypt_and_send".
Do all the special treatment of the message type first.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
7577e59f6c connectd: refactor outgoing loop.
Give us a single "next message" function to call.  This will be useful
when we want to write more than one at a time.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
42bdb2d638 CI: run tests in the wireshark group so we can test packet sizes
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
369338347d pytest: add fixture for checking packet sizes.
This requires access to dumpcap.  On Ubuntu, at least, this means you
need to be in the "wireshark" group.

We may also need:
	sudo ethtool -K lo gro off gso off tso off
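
A rough sketch of such a fixture (hypothetical, not the one this commit adds): run dumpcap on loopback for the duration of the test and hand back the capture file.

```
import subprocess
import time

import pytest

@pytest.fixture
def loopback_capture(tmp_path):
    """Capture lo traffic with dumpcap; needs capture permission, e.g.
    membership in the "wireshark" group on Ubuntu."""
    pcap = str(tmp_path / "capture.pcapng")
    proc = subprocess.Popen(["dumpcap", "-i", "lo", "-w", pcap])
    time.sleep(1)            # crude: give dumpcap a moment to start capturing
    yield pcap
    proc.terminate()
    proc.wait()
```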

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
cd7afb506a pytest: remove now-invalid test.
Commit 888745be16 (dev_disconnect: remove @ marker.) in v0.11 (April 2022) removed the '@' marker from
our dev_disconnect code, but one test still uses it.

Refactoring this code made it crash on invalid input.  The test
triggered a db issue which has been long fixed, so I'm simply removing
it.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
4d030d83ce pytest: fix flake in test_fetchinvoice_autoconnect.
l3 doesn't just need to know about l2 (which it can get from the
channel_announcement), but needs to see the node_announcement.

Otherwise:

```
        l1, l2 = node_factory.line_graph(2, wait_for_announce=True,
                                         # No onion_message support in l1
                                         opts=[{'dev-force-features': -39},
                                               {'dev-allow-localhost': None}])
    
        l3 = node_factory.get_node()
        l3.rpc.connect(l1.info['id'], 'localhost', l1.port)
        wait_for(lambda: l3.rpc.listnodes(l2.info['id'])['nodes'] != [])
    
        offer = l2.rpc.call('offer', {'amount': '2msat',
                                      'description': 'simple test'})
>       l3.rpc.call('fetchinvoice', {'offer': offer['bolt12']})

tests/test_pay.py:4804: 
...	
>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: fetchinvoice, payload: {'offer': 'lno1qgsqvgnwgcg35z6ee2h3yczraddm72xrfua9uve2rlrm9deu7xyfzrcgqypq5zmnd9khqmr9yp6x2um5zcssxwz9sqkjtd8qwnx06lxckvu6g8w8t0ue0zsrfqqygj636s4sw7v6'}, error: {'code': 1003, 'message': 'Failed: could not route or connect directly to 033845802d25b4e074ccfd7cd8b339a41dc75bf9978a034800444b51d42b07799a: {"code":400,"message":"Unable to connect, no address known for peer"}'}
```
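
A hedged sketch of the stronger wait, reusing the names from the test above (the 'alias' check is an assumption about when the node_announcement has been seen, not taken from the commit):

```
wait_for(lambda: [n for n in l3.rpc.listnodes(l2.info['id'])['nodes']
                  if 'alias' in n] != [])
```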

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-18 14:13:25 +10:30
Rusty Russell
29e0a1ddfe bkpr: limp along if we lost our db.
We can't really do decent bookkeeping any more, but don't crash!

```
bookkeeper: plugins/bkpr/recorder.c:178: find_txo_chain: Assertion `acct->open_event_db_id' failed.
bookkeeper: FATAL SIGNAL 6 (version v25.12)
0xaaaab7d51a7f send_backtrace
	common/daemon.c:38
0xaaaab7d51b2b crashdump
	common/daemon.c:83
0xffff8c0b07cf ???
	???:0
0xffff8bdf7608 __pthread_kill_implementation
	./nptl/pthread_kill.c:44
0xffff8bdacb3b __GI_raise
	../sysdeps/posix/raise.c:26
0xffff8bd97dff __GI_abort
	./stdlib/abort.c:79
0xffff8bda5cbf __assert_fail_base
	./assert/assert.c:96
0xffff8bda5d2f __assert_fail
	./assert/assert.c:105
0xaaaab7d41fd7 find_txo_chain
	plugins/bkpr/recorder.c:178
0xaaaab7d421fb account_onchain_closeheight
	plugins/bkpr/recorder.c:291
0xaaaab7d37687 do_account_close_checks
	plugins/bkpr/bookkeeper.c:884
0xaaaab7d38203 parse_and_log_chain_move
	plugins/bkpr/bookkeeper.c:1261
0xaaaab7d3871f listchainmoves_done
	plugins/bkpr/bookkeeper.c:171
0xaaaab7d4811f handle_rpc_reply
	plugins/libplugin.c:1073
0xaaaab7d4827b rpc_conn_read_response
	plugins/libplugin.c:1377
0xaaaab7d889a7 next_plan
	ccan/ccan/io/io.c:60
0xaaaab7d88f7b do_plan
	ccan/ccan/io/io.c:422
0xaaaab7d89053 io_ready
	ccan/ccan/io/io.c:439
```

Fixes: https://github.com/ElementsProject/lightning/issues/8854
Changelog-Fixed: Plugins: `bkpr_listbalances` no longer crashes if we lose our db, then do emergencyrecover and close a channel.
Reported-by: https://github.com/enaples
2026-02-17 12:10:26 +10:30
Rusty Russell
2e8261ef9e pytest: test for bkpr_listbalances after emergencyrecover.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-17 12:10:26 +10:30
Rusty Russell
b150309854 pytest: test for crash when we have dying channels and compact the gossip_store.
Before I fixed the handling of dying channels:

```
lightning_gossipd: gossip_store: can't read hdr offset 2362/2110: Success (version v25.12-279-gb38abe6-modded)
0x6537c19ecf3a send_backtrace
        common/daemon.c:38
0x6537c19f1a1d status_failed
        common/status.c:207
0x6537c19e557a gossip_store_get_with_hdr
        gossipd/gossip_store.c:527
0x6537c19e5613 check_msg_type
        gossipd/gossip_store.c:559
0x6537c19e5a36 gossip_store_set_flag
        gossipd/gossip_store.c:577
0x6537c19e5c82 gossip_store_del
        gossipd/gossip_store.c:629
0x6537c19e8ddd gossmap_manage_new_block
        gossipd/gossmap_manage.c:1362
0x6537c19e390e new_blockheight
        gossipd/gossipd.c:430
0x6537c19e3c37 recv_req
        gossipd/gossipd.c:532
0x6537c19ed22a handle_read
        common/daemon_conn.c:35
0x6537c19fbe71 next_plan
        ccan/ccan/io/io.c:60
0x6537c19fc174 do_plan
        ccan/ccan/io/io.c:422
0x6537c19fc231 io_ready
        ccan/ccan/io/io.c:439
0x6537c19fd647 io_loop
        ccan/ccan/io/poll.c:470
0x6537c19e463d main
        gossipd/gossipd.c:609
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
acb8a8cc15 gossipd: dev-compact-gossip-store to manually invoke compaction.
And tests!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
88f3f97b7c gossipd: reset dying_channels array after compact.
Reported-by: @daywalker90
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
912b40aeff gossipd: compact when gossip store is 80% deleted records.
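
The trigger condition, as a tiny sketch (whether the ratio is counted in records or bytes is not spelled out here):

```
def should_compact(deleted, total):
    """Compact once 80% of the gossip store is deleted entries."""
    return total > 0 and deleted / total >= 0.80
```
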
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: `gossipd` now uses a `lightning_gossip_compactd` helper to compact the gossip_store on demand, keeping it under about 210MB.
2026-02-16 17:23:33 +10:30
Rusty Russell
15696d97bd gossipd: code to invoke compactd and reopen store.
This isn't called anywhere yet.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
f56f8adcdf gossipd: lightningd/lightning_gossip_compactd
A new subprocess run by gossipd to create a compacted gossip store.

It's pretty simple: a linear compaction of the file.  Once it has compacted
the amount it was told to, gossipd waits for it to finish the last bit.
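
The shape of that linear pass, as a hedged Python sketch (the record iterator, the deleted flag, and the writer are hypothetical stand-ins for the real store format):

```
DELETED = 0x8000    # placeholder flag bit, not the real store constant

def compact(read_records, write_record):
    """Copy live records to the new store in order, skipping deleted ones.
    read_records yields (flags, payload) pairs; write_record appends to the
    new file -- both are stand-ins."""
    for flags, payload in read_records():
        if flags & DELETED:
            continue
        write_record(payload)
```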

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
a966dd71ad common: expose gossip_store "header and type" single-read struct.
gossip_store.c uses this to avoid two reads, and we want to use it
elsewhere too.

Also fix old comment on gossip_store_readhdr().

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
1fb4da075f gossipd: put the last_writes array inside struct gossip_store.
This is the file responsible for all the writing, so it should be
responsible for the rewriting if necessary (rather than
gossmap_manage).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
facf24b6ee devtools/gossmap-compress: create latest gossip_store version
This saves gossipd from converting it:

```
lightningd-1 2026-02-02T00:50:49.505Z DEBUG   gossipd: Time to convert version 14 store: 890 msec
```

Reducing node startup time from 1.4 seconds to 0.5 seconds.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
445bcd040a gossipd: don't compact on startup.
We now only need to walk it if we're doing an upgrade.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: `gossipd` no longer compacts gossip_store on startup (improving start times significantly).
2026-02-16 17:23:33 +10:30
Rusty Russell
dfc4ce21de gossipd: don't gather dying channels during compaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
900fd08455 gossipd: use gossmap to load the dying entries.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
7d70e8baf2 gossmap: keep stats on live/deleted records.
This way gossmap_manage can decide when to compact.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
1ad8ca9603 gossmap: add callback for gossipd to see dying messages.
gossmap doesn't care, so gossipd currently has to iterate through the
store to find them at startup.  Create a callback for gossipd to use
instead.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
e8fd235d4e common: move gossip_store_wire.csv into common/ from gossipd/
It's used by common/gossip_store.c, which is used by many things other than
gossipd.  This file belongs in common.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
5dcf39867c gossipd: write uuid record on startup.
This is the first record, and ignored by everything else.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
25131d2ea2 common/gossmap: use the UUID record on reopen.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
b1055aa0ac gossip_store: add UUID entry at front of the store.
We also put this in the store_ended message, so you can tell
whether the equivalent_offset there really refers to this new
entry (or whether two or more rewrites have happened).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
d4c62f8c4c tools: delete gossip_store if needed for downgrade, even if db hasn't changed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
5e1bbb08c7 gossmap: reduce load times by 20%
It's actually quite quick to load a cache-hot 308,874,377 byte
gossip_store (normal -Og build), but perf does show time spent
in siphash(), which is a bit overkill here, so drop that:

Before:
	Time to load: 66718983-78037766(7.00553e+07+/-2.8e+06)nsec
	
After: 
	Time to load: 54510433-57991725(5.61457e+07+/-1e+06)nsec

We could save maybe 10% more by disabling checksums, but having
that assurance is nice.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
120c9d8ce4 devtools/gossmap-compress: generate better scids.
Our poor scid generation clashes badly with simplified hashing (the
next patch) leading to l1's startup time when using a generated map
moving from 4 seconds to 14 seconds.  Under CI it actually timed out
several tests.

Fixing our fake scids to be more "random" reduces it to 1.5 seconds.
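
For context, a short_channel_id packs block height, transaction index, and output index into 24 + 24 + 16 bits; a sketch of producing more "random"-looking fake scids (illustrative only, not the devtool's actual scheme):

```
import random

def fake_scid(rng):
    """Pack a pseudo-random (block, txindex, output) triple into a
    BOLT-style short_channel_id."""
    block = rng.randrange(1, 1 << 24)
    txindex = rng.randrange(0, 1 << 24)
    output = rng.randrange(0, 1 << 16)
    return (block << 40) | (txindex << 16) | output
```

e.g. fake_scid(random.Random(0)) gives a stable but well-spread value.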

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
ae957161e6 pytest: fix bogus test_gossip_store_compact_noappend test.
It didn't do anything, since the dev_compact_gossip_store command was
removed.  When we make it do something, it crashes since old_len is 0:

```
gossipd: gossip_store_compact: bad version
gossipd: FATAL SIGNAL 6 (version v25.12rc3-1-g9e6c715-modded)
...
gossipd: backtrace: ./stdlib/abort.c:79 (__GI_abort) 0x7119bd8288fe
gossipd: backtrace: ./assert/assert.c:96 (__assert_fail_base) 0x7119bd82881a
gossipd: backtrace: ./assert/assert.c:105 (__assert_fail) 0x7119bd83b516
gossipd: backtrace: gossipd/gossip_store.c:52 (append_msg) 0x56294de240eb
gossipd: backtrace: gossipd/gossip_store.c:358 (gossip_store_compact) 0x56294
gossipd: backtrace: gossipd/gossip_store.c:395 (gossip_store_new) 0x56294de24
gossipd: backtrace: gossipd/gossmap_manage.c:455 (setup_gossmap) 0x56294de255
gossipd: backtrace: gossipd/gossmap_manage.c:488 (gossmap_manage_new) 0x56294
gossipd: backtrace: gossipd/gossipd.c:400 (gossip_init) 0x56294de22de9
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
e5e5998cd8 devtools: enhance dump-gossipstore to show some details of messages.
Not a complete decode, just the highlights (what channel was announced
or updated, what node was announced).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-16 17:23:33 +10:30
Rusty Russell
09781bd381 lightningd: don't assume peer exists in peer_connected_serialize.
It's always true for the first hook invocation, but if there is more
than one plugin, the peer could vanish between the two!  (In the default configuration, this can't happen.)

This bug has been around since v23.02.

Note: we always tell all the plugins about the peer, even if it's
already gone.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: lightningd: possible crash when peers disconnected if there was more than one plugin servicing the `peer_connected` hook.
Reported-by: https://github.com/santyr
Fixes: https://github.com/ElementsProject/lightning/issues/8858
2026-02-12 09:08:10 +10:30
Rusty Russell
eaf6fabf04 pytest: reproduce crash when node disconnects between hooks:
```
lightningd-2 2026-02-09T00:41:35.196Z TRACE   lightningd: Plugin peer_connected_logger_a.py returned from peer_connected hook call
lightningd-2 2026-02-09T00:41:35.196Z TRACE   lightningd: Calling peer_connected hook of plugin peer_connected_logger_b.py
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: FATAL SIGNAL 11 (version v25.12-257-g2a5fbd1-modded)
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: common/daemon.c:46 (send_backtrace) 0x5b2abd7f29bd
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: common/daemon.c:83 (crashdump) 0x5b2abd7f2a0c
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ./signal/../sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c:0 ((null)) 0x75950d84532f
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/peer_control.c:1333 (peer_connected_serialize) 0x5b2abd79c964
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin_hook.c:359 (plugin_hook_call_next) 0x5b2abd7ae14a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin_hook.c:299 (plugin_hook_callback) 0x5b2abd7ae38f
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin.c:701 (plugin_response_handle) 0x5b2abd7a7e28
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/plugin.c:790 (plugin_read_json) 0x5b2abd7ace9c
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:60 (next_plan) 0x5b2abd81dada
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:422 (do_plan) 0x5b2abd81def6
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/io.c:439 (io_ready) 0x5b2abd81dfb3
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ccan/ccan/io/poll.c:470 (io_loop) 0x5b2abd81f0db
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/io_loop_with_timers.c:22 (io_loop_with_timers) 0x5b2abd77c13b
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: lightningd/lightningd.c:1495 (main) 0x5b2abd781c6a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ../sysdeps/nptl/libc_start_call_main.h:58 (__libc_start_call_main) 0x75950d82a1c9
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: ../csu/libc-start.c:360 (__libc_start_main_impl) 0x75950d82a28a
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: (null):0 ((null)) 0x5b2abd752964
lightningd-2 2026-02-09T00:41:35.293Z **BROKEN** lightningd: backtrace: (null):0 ((null)) 0xffffffffffffffff
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-12 09:08:10 +10:30
Rusty Russell
1b1274df7e CI: reduce parallelism for pytest.
In November 2022 we seemed to increase parallelism from 2 and 3 to 10!
That is a huge load for these CI boxes, and does explain some of our
flakes.

We only run in parallel because some tests sleep, but there are diminishing
returns (GH runners have 4 vCPUs, 16GB RAM).

This reduces it so:
- Normal runs are -n 4
- Valgrind runs are -n 2
- Sanitizer runs are -n 3

If I use my beefy build box (64GB RAM) but reduce it to 4 CPUs:

Time for pytest -n 5:
Time for pytest -n 4:
Time for pytest -n 3:

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-09 19:46:22 +10:30
Rusty Russell
939aec3b61 pytest: make hold_timeout.py test plugin release on a prompt, not timeout.
Avoids guessing what the timeout should be: we use a file trigger instead.  This
is more reliable, and should reduce a flake in test_sql under valgrind.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2026-02-09 14:48:15 +10:30