When we enter the wrong passphrase, hsmd crashes like this with an unknown message type:
lightning_hsmd: Failed to load hsm_secret: Wrong passphrase (version v25.12rc1-7-g7713a42-modded)
0x102ba44bf ???
send_backtrace+0x4f:0
0x102b0900f status_failed
common/status.c:207
0x102af1a37 hsmd_send_init_reply_failure
hsmd/hsmd.c:301
0x102af1497 load_hsm
hsmd/hsmd.c:446
0x102af1497 init_hsm
hsmd/hsmd.c:548
0x102b29e63 next_plan
ccan/ccan/io/io.c:60
0x102b29e63 do_plan
ccan/ccan/io/io.c:422
0x102b29d8b io_ready
ccan/ccan/io/io.c:439
0x102b2b4bf io_loop
ccan/ccan/io/poll.c:470
0x102af0a83 main
hsmd/hsmd.c:886
lightningd: HSM sent unknown message type
This change swaps write_all() for wire_sync_write(), since write_all() omits the wire protocol length prefix. We also no longer send a stack trace when the user has entered the wrong passphrase; we exit cleanly instead.
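For illustration, here is a minimal sketch of why the framing matters; the header width and helper names are assumptions, not the real common/wire_sync.c code. The reader pulls a fixed-size length header first, so a bare write_all() payload gets misread, with its first bytes interpreted as a message type.
```
#include <arpa/inet.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Loop until all bytes are written (short writes are legal). */
static bool write_all_bytes(int fd, const void *data, size_t size)
{
	const char *p = data;
	while (size) {
		ssize_t done = write(fd, p, size);
		if (done <= 0)
			return false;
		p += done;
		size -= done;
	}
	return true;
}

/* Sketch: prefix the payload with a big-endian length header (32-bit
 * here by assumption) so the reader knows where the message ends.
 * Writing the payload alone, as the buggy path did, loses this. */
static bool framed_write(int fd, const void *msg, uint32_t len)
{
	uint32_t hdr = htonl(len);
	return write_all_bytes(fd, &hdr, sizeof(hdr))
	       && write_all_bytes(fd, msg, len);
}
```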
It breaks anyway (see the log below), but more importantly we don't need to install lowdown any more, since the check in build-release.sh has been removed.
```
Run sudo apt-get install -y lowdown
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
lowdown
0 upgraded, 1 newly installed, 0 to remove and 21 not upgraded.
Need to get 129 kB of archives.
After this operation, 314 kB of additional disk space will be used.
Get:1 file:/etc/apt/apt-mirrors.txt Mirrorlist [144 B]
Get:2 http://azure.archive.ubuntu.com/ubuntu noble/universe amd64 lowdown amd64 1.1.0-1 [129 kB]
Fetched 129 kB in 0s (2971 kB/s)
Selecting previously unselected package lowdown.
(Reading database ... 216225 files and directories currently installed.)
Preparing to unpack .../lowdown_1.1.0-1_amd64.deb ...
Unpacking lowdown (1.1.0-1) ...
Setting up lowdown (1.1.0-1) ...
Processing triggers for man-db (2.12.0-4build2) ...
Not building database; man-db/auto-update is not 'true'.
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
checking for getpagesize() in <unistd.h>... yes
checking for isblank() in <ctype.h>... yes
checking for little endian... yes
checking for memmem in <string.h>... yes
checking for memrchr in <string.h>... yes
checking for mmap() declaration... yes
checking for /proc/self/maps exists... yes
checking for qsort_r cmp takes trailing arg... yes
checking for __attribute__((section)) and __start/__stop... yes
checking for stack grows upwards... no
checking for statement expression support... yes
checking for <sys/filio.h>... no
checking for <sys/termios.h>... yes
checking for <sys/unistd.h>... yes
checking for __typeof__ support... yes
checking for unaligned access to int... yes
checking for utime() declaration... yes
checking for __attribute__((warn_unused_result))... yes
checking for #pragma omp and -fopenmp support... yes
checking for <valgrind/memcheck.h>... no
checking for working <ucontext.h... yes
checking for passing pointers via makecontext()... yes
checking for __builtin_cpu_supports()... yes
checking for closefrom() offered by system... yes
checking for F_CLOSEM defined for fctnl.... no
checking for close_range syscall available as __NR_close_range.... yes
checking for F_MAXFD defined for fcntl.... no
checking for zlib support... yes
checking for libsodium with IETF chacha20 variants... no
checking for sqlite3... yes
checking for postgres... yes
checking for User Statically-Defined Tracing (USDT)... no
checking for compiler is GCC... yes
checking for GCC version is 7 or above... yes
Writing variables to config.vars.2200... yes
Writing header to ccan/config.h.2200... yes
checking for python3-mako... not found
checking for lowdown... found
checking for sha256sum... found
checking for jq... found
Setting PREFIX... /usr/local
Setting CC... cc
Setting CONFIGURATOR_CC... cc
Setting CWARNFLAGS... -Wall -Wundef -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes -Wold-style-definition -Werror -Wno-maybe-uninitialized -Wshadow=local
Setting CDEBUGFLAGS... -std=gnu11 -g -fstack-protector-strong
Setting COPTFLAGS... -Og
CSANFLAGS not found
FUZZFLAGS not found
FUZZER_LIB not found
LLVM_LDFLAGS not found
SQLITE3_CFLAGS not found
Setting SQLITE3_LDLIBS... -lsqlite3
Setting POSTGRES_INCLUDE... -I/usr/include/postgresql
Setting POSTGRES_LDLIBS... -L/usr/lib/x86_64-linux-gnu -lpq
SODIUM_CFLAGS not found
SODIUM_LDLIBS not found
Setting VALGRIND... 0
Setting DEBUGBUILD... 0
Setting COMPAT... 1
Setting PYTEST... python3 -m pytest
Setting STATIC... 0
Setting CLANG_COVERAGE... 0
Setting ASAN... 0
Setting UBSAN... 0
Setting TEST_NETWORK... regtest
Setting HAVE_PYTHON3_MAKO... 0
Setting SHA256SUM... sha256sum
Setting FUZZING... 0
Setting RUST... 1
Setting PYTHON... python3
Setting SED... sed
*** We need a libsodium >= 1.0.4 (released 2015-06-11).
Error: Process completed with exit code 1.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes #6978, where bolt11 annotations were lost when sendonion failed early and the payment was retried.
When the sendonion RPC failed before the payment was saved to the database, the invstring_used flag remained true, causing retry attempts to omit the bolt11 parameter. A successful retry would then save to the DB without the bolt11 annotation.
Move the setting of the invstring_used flag from payment_createonion_success to payment_sendonion_success. This ensures the flag is only set after sendonion actually succeeds. The bolt11 will be sent with every sendonion attempt until the first successful one; we accept the minor redundancy for cleaner state management.
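A sketch of the state change, with assumed struct and callback names (the real callbacks live in the pay plugin):
```
#include <stdbool.h>

struct payment {
	bool invstring_used;	/* has bolt11 been persisted with a send? */
	/* ... */
};

void send_onion(struct payment *p);	/* assumed helper */

/* Before: the flag was set as soon as the onion was created, so a
 * failed sendonion left it true and retries omitted the bolt11. */
static void payment_createonion_success(struct payment *p)
{
	/* p->invstring_used = true;	<- removed: too early */
	send_onion(p);
}

/* After: set the flag only once sendonion has actually succeeded. */
static void payment_sendonion_success(struct payment *p)
{
	p->invstring_used = true;
}
```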
Changelog-Fixed: Plugins: `listpays` can be missing the bolt11 information in some cases where `pay` is used.
Also added the missing "added" annotation. This meant I had to manually change contrib/msggen/msggen/patch.py to insert that "added" annotation where it was missing from .msggen.json.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-None: introduced this release.
The original method name was lsps-lsps2-invoice, but I somehow messed it up and renamed it during a rebase.
Changelog-Changed: lsps-jitchannel is now lsps-lsps2-invoice
Signed-off-by: Peter Neuroth <pet.v.ne@gmail.com>
We are seeing this in the CI logs, e.g. tests/test_connection.py::test_reconnect_sender_add1:
lightningd-1 2025-11-17T05:48:00.665Z DEBUG jsonrpc#84: Pausing parsing after 1 requests
followed by:
lightningd-1 2025-11-17T05:48:02.068Z **BROKEN** 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: wake delay for WIRE_CHANNEL_REESTABLISH: 8512msec
So, what is consuming lightningd for 8 or so seconds?
This message helped diagnose that the issue was dev-memleak: fixed in a different branch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we are the only writer, we don't need one.
Name (time in s)                            Min      Max      Mean     StdDev   Median
sqlite:   test_spam_listcommands (before)   2.1193   2.4524   2.2343   0.1341   2.2229
sqlite:   test_spam_listcommands (after)    2.0140   2.2349   2.1001   0.0893   2.0644
Postgres: test_spam_listcommands (before)   6.5572   6.8440   6.7067   0.1032   6.6967
Postgres: test_spam_listcommands (after)    4.4237   5.0024   4.6495   0.2278   4.6717
A nice 31% speedup!
Changelog-Changed: Postgres: significant speedup on read-only operations (e.g. 30% on empty SELECTs)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had forgotten this file existed, but it needs tqdm and pytest-benchmark, so add those dev
requirements.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always start a transaction before processing, but there are cases where
we don't need to. Switch to doing it on-demand.
This doesn't make a big difference for sqlite3, but it can for Postgres because
of the latency: 12% or so. Every bit helps!
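The shape of the change, as a sketch with assumed names (not the real db API):
```
#include <stdbool.h>

struct db {
	bool in_transaction;
	/* ... */
};

void db_raw_exec(struct db *db, const char *sql);	/* assumed helper */

/* Begin lazily: the first statement in a request opens the
 * transaction; requests that never touch the db skip the round trip
 * entirely (which is where the Postgres latency win comes from). */
static void db_begin_if_needed(struct db *db)
{
	if (!db->in_transaction) {
		db_raw_exec(db, "BEGIN;");
		db->in_transaction = true;
	}
}

static void db_commit_if_needed(struct db *db)
{
	if (db->in_transaction) {
		db_raw_exec(db, "COMMIT;");
		db->in_transaction = false;
	}
}
```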
30 runs, min-max(mean+/-stddev):
Postgres before: 8.842773-9.769030(9.19531+/-0.21)
Postgres after: 8.007967-8.321856(8.14172+/-0.066)
sqlite3 before: 7.486042-8.371831(8.15544+/-0.19)
sqlite3 after: 7.973411-8.576135(8.3025+/-0.12)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is slow, but will make sure we find out if we add latency spikes in future.
tests/test_coinmoves.py::test_generate_coinmoves (5,000,000, sqlite3):
Time (from start to end of l2 node): 223 seconds
Latency min/median/max: 0.0023 / 0.0033 / 0.113 seconds
tests/test_coinmoves.py::test_generate_coinmoves (5,000,000, Postgres):
Time (from start to end of l2 node): 470 seconds
Latency min/median/max: 0.0024 / 0.0098 / 0.124 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: lightningd: multiple significant speedups for large nodes, especially preventing "freezes" under exceptionally high load.
This avoids latency spikes when we ask lightningd to give us 2M entries.
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 88 seconds (was 95)
Worst latency: 0.028 seconds **WAS 4.5**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've found all the issues, the latency spike (4 seconds on my laptop) for querying 2M elements remains.
Restore the limited sampling which we reverted, but make it 10,000 now.
This doesn't help our worst-case latency, because the sql plugin still asks for all 2M entries on first access. We address that next.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a reasonable number of commands now, and we *already* keep a
strmap for the usage strings. So simply keep the usage and the command
in the map, and skip the array.
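A sketch of the resulting structure using ccan/strmap (struct and field names are illustrative):
```
#include <ccan/strmap/strmap.h>
#include <stddef.h>

struct json_command;	/* the real command definition */

/* One map entry now carries both pieces we used to split between the
 * usage strmap and the command array. */
struct usage_and_cmd {
	const char *usage;
	const struct json_command *cmd;
};

struct cmd_map {
	STRMAP_MEMBERS(struct usage_and_cmd *);
};

/* Command lookup becomes a single map probe instead of an array scan. */
static const struct json_command *find_cmd(struct cmd_map *map,
					   const char *name)
{
	struct usage_and_cmd *uc = strmap_get(map, name);
	return uc ? uc->cmd : NULL;
}
```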
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 95 seconds (was 102)
Worst latency: 4.5 seconds
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, Postgres):
Time (from start to end of l2 node): 231 seconds
Worst latency: 4.8 seconds
Note: for comparison, the values under 25.09.2 were:
sqlite3:
Time (from start to end of l2 node): 403 seconds
Postgres:
Time (from start to end of l2 node): 671 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 102 seconds **WAS 126**
Worst latency: 4.5 seconds **WAS 5.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that ccan/io rotates through callbacks, we can call io_always() to
yield.
We're now fast enough that this doesn't have any effect on this test, but it's still good to have.
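The pattern looks roughly like this (io_always() is the real ccan/io call; the work helpers are assumed):
```
#include <ccan/io/io.h>
#include <stdbool.h>

struct work;
bool process_one_batch(struct work *w);			/* assumed helper */
struct io_plan *read_next(struct io_conn *conn, void *arg);	/* assumed */

/* Handle a bounded batch, then return io_always() so the loop can
 * service other connections before calling us back, rather than
 * looping here until the backlog is drained. */
static struct io_plan *do_more_work(struct io_conn *conn, void *arg)
{
	struct work *w = arg;

	if (process_one_batch(w))
		return io_always(conn, do_more_work, w);	/* yield */
	return read_next(conn, w);
}
```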
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that ccan/io rotates through callbacks, we can call io_always() to yield.
Though it doesn't fire on our benchmark, it's a good thing to do.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This rotates through fds explicitly, to avoid unfairness.
This doesn't really make a difference until we start using it.
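Conceptually (a sketch, not the actual ccan/io/poll.c code):
```
#include <poll.h>
#include <stddef.h>

void handle_ready_fd(size_t idx);	/* assumed helper */

static size_t scan_start;

/* Start each scan one past where the last scan started, so fds early
 * in the array can't monopolize service when many are ready at once. */
static void service_ready(struct pollfd *fds, size_t num_fds)
{
	for (size_t i = 0; i < num_fds; i++) {
		size_t idx = (scan_start + i) % num_fds;
		if (fds[idx].revents)
			handle_ready_fd(idx);
	}
	scan_start = (scan_start + 1) % num_fds;
}
```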
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We would keep parsing if we were out of tokens, even if we had actually
finished one object!
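In sketch form (all helper names assumed), the fix is to check for a completed object before deciding to grow the token array and continue:
```
#include <stdbool.h>

struct json_conn;
void parse_more(struct json_conn *jcon);			/* assumed */
bool have_complete_object(const struct json_conn *jcon);	/* assumed */
void dispatch_object(struct json_conn *jcon);			/* assumed */
bool out_of_tokens(const struct json_conn *jcon);		/* assumed */
void grow_token_array(struct json_conn *jcon);			/* assumed */

static void parse_loop(struct json_conn *jcon)
{
	for (;;) {
		parse_more(jcon);
		/* New: a finished object is handed off immediately,
		 * even if the token array also happens to be full. */
		if (have_complete_object(jcon)) {
			dispatch_object(jcon);
			return;
		}
		/* Old code took this branch first and kept parsing. */
		if (out_of_tokens(jcon)) {
			grow_token_array(jcon);
			continue;
		}
		return;	/* need more input: wait for the next read */
	}
}
```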
These are comparison against the "xpay: use filtering on rpc_command
so we only get called on "pay"." not the disasterous previous one!
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 126 seconds (was 135)
Worst latency: 5.1 seconds **WAS 12.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A client can do this by sending a large request, so this allows us to see what
happens if they do that, even though 1MB (2MB buffer) is more than we need.
This drives our performance through the floor: see the next patch, which gets us back on track.
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 271 seconds **WAS 135**
Worst latency: 105 seconds **WAS 12.1**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: "filters" can be specified on the `custommsg` hook to limit what message types the hook will be called for.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: pyln-client: optional filters can be given when hooks are registered (for supported hooks)
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 135 seconds **WAS 227**
Worst latency: 12.1 seconds **WAS 62.4**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
xpay is relying on the destructor to send another request. This means
that it doesn't actually submit the request until *next time* we wake.
This has been in xpay from the start, but it was not noticeable until xpay stopped subscribing to every command on the rpc_command hook.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: the `rpc_command` hook can now specify a "filter" on what commands it is interested in.
We're going to use this on the "rpc_command" hook, to allow xpay to specify that it
only wants to be called on "pay" commands.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This potentially saves us some reads (not measurably, though), at the cost of less fairness. It's important to measure, because a single large request will increase the buffer size for successive requests, so we can see this pattern in real usage.
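The read itself becomes (a sketch, with the buffer bookkeeping assumed):
```
#include <sys/types.h>
#include <unistd.h>

/* Sketch: size the read by the buffer's current headroom instead of a
 * fixed chunk.  After one large request grows the buffer, later reads
 * can slurp many queued requests per syscall: fewer reads, but one
 * busy client holds our attention for longer. */
static ssize_t read_into(int fd, char *buf, size_t used, size_t capacity)
{
	/* old: return read(fd, buf + used, FIXED_CHUNK); */
	return read(fd, buf + used, capacity - used);
}
```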
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 227 seconds (was 239)
Worst latency: 62.4 seconds (was 56.9)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Profiling shows us spending all our time in tal_arr_remove when dealing
with a giant number of output streams. This applies both for RPC output
and plugin output.
Use a linked list instead.
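Sketch with ccan/list (struct and field names assumed): list removal is O(1), where tal_arr_remove() memmoves the array tail on every removal.
```
#include <ccan/list/list.h>

/* Each output stream carries an embedded list node; the owner keeps a
 * list head instead of a tal_arr of streams. */
struct out_stream {
	struct list_node list;
	/* ... buffered output ... */
};

struct writer {
	struct list_head out_streams;
};

static void stream_done(struct writer *w, struct out_stream *os)
{
	/* old: tal_arr_remove(&w->streams, i);  O(n) memmove each time */
	list_del_from(&w->out_streams, &os->list);	/* O(1) */
}
```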
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 239 seconds **WAS 518**
Worst latency: 56.9 seconds **WAS 353**
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've rid ourselves of the worst offenders, we can make this a real stress test. We remove plugin io saving and low-level logging, to avoid benchmarking test artifacts.
Here are the results:
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, sqlite3):
Time (from start to end of l2 node): 518 seconds
Worst latency: 353 seconds
tests/test_coinmoves.py::test_generate_coinmoves (2,000,000, Postgres):
Time (from start to end of l2 node): 417 seconds
Worst latency: 96.6 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have 8 or fewer spans at once (as is the normal case), don't do allocation, which might interfere with tracing.
This doesn't change our test_generate_coinmoves() benchmark.
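The usual small-array-first shape, sketched with assumed names:
```
#include <stdbool.h>
#include <stddef.h>

#define NUM_STATIC_SPANS 8

struct span {
	bool in_use;
	/* ... */
};

struct span *malloc_span(void);		/* assumed slow path */

/* Common case: grab a slot from the fixed pool, so tracing a span
 * performs no allocation (allocation can itself be traced, which would
 * distort what we're measuring).  Only spill to the heap beyond 8. */
static struct span pool[NUM_STATIC_SPANS];

static struct span *span_alloc(void)
{
	for (size_t i = 0; i < NUM_STATIC_SPANS; i++) {
		if (!pool[i].in_use) {
			pool[i].in_use = true;
			return &pool[i];
		}
	}
	return malloc_span();
}
```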
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have USDT compiled in, scanning the array of spans becomes prohibitively expensive with really large numbers of requests. In the bookkeeper code, this became clear in profiling when catching up with 1.6M channel events.
Use a hash table instead.
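With ccan/htable the shape is roughly as below; the key choice, hash function, and exact macro arguments are assumptions (check ccan/htable/htable_type.h for the current signature):
```
#include <ccan/htable/htable_type.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct span {
	uint64_t id;
	/* ... */
};

static const uint64_t *span_keyof(const struct span *s)
{
	return &s->id;
}

static size_t span_hash(const uint64_t *id)
{
	return *id * 0x9e3779b97f4a7c15ULL;	/* cheap mix; assumption */
}

static bool span_eq(const struct span *s, const uint64_t *id)
{
	return s->id == *id;
}

/* Defines span_map_init/add/get/del etc.: looking up a span by id is
 * now O(1) instead of a linear scan over every live span. */
HTABLE_DEFINE_TYPE(struct span, span_keyof, span_hash, span_eq, span_map);
```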
Before:
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 269 seconds (vs 14 with HAVE_USDT=0)
Worst latency: 4.0 seconds
After:
tests/test_coinmoves.py::test_generate_coinmoves (100,000, sqlite3):
Time (from start to end of l2 node): 14 seconds
Worst latency: 4.3 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>