doc: Fix mdx format errors generated due to Readme v2 migrations
Changelog-None: Documentation fixes only.
Committed by Rusty Russell. Parent: edfb64c736. Commit: ff0ee6dfa0.
To recover in-channel funds, you need to use one or more of the backup strategies below.

## SQLITE3 `--wallet=${main}:${backup}` And Remote NFS Mount

> 📘 Who should do this:
>
> Casual users.

> 🚧
>
> This technique is only supported in versions later than v0.10.2; it is not available in v0.10.2 itself or earlier.
>
> On earlier versions, the `:` character is not special and will be considered part of the path of the database file.
When using the SQLITE3 backend (the default), you can specify a second database file to replicate to, by separating the second file with a single `:` character in the `--wallet` option, after the main database filename.

For example, if the user running `lightningd` is named `user`, and you are on the Bitcoin mainnet with the default `${LIGHTNINGDIR}`, you can specify in your `config` file:

```shell
wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```

Or via command line:

```shell
lightningd --wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```
If the second database file does not exist but the directory that would contain it does exist, the file is created.

If the directory of the second database file does not exist, `lightningd` will fail at startup.

If the second database file already exists, on startup it will be overwritten with the main database.

During operation, all database updates will be done on both databases.
The main and backup files will **not** be identical at every byte, but they will still contain the same data.

It is recommended that you use **the same filename** for both files, just on different directories.
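As a concrete sketch of the setup above (the paths are illustrative stand-ins; in real use `$BACKUP_DIR` would be your network mount or second storage device):

```shell
# Stand-in paths for illustration only; adjust to your real setup.
LIGHTNINGDIR="$HOME/.lightning/bitcoin"
BACKUP_DIR="$HOME/lightning-backup"

# The directory of the second database file must already exist,
# or lightningd will fail at startup.
mkdir -p "$BACKUP_DIR" "$LIGHTNINGDIR"

# Same filename, different directory, joined by a single ':'.
echo "wallet=sqlite3://${LIGHTNINGDIR}/lightningd.sqlite3:${BACKUP_DIR}/lightningd.sqlite3" \
  >> "$HOME/.lightning/config"
```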
This has the advantage compared to the `backup` plugin below of requiring exactly the same amount of space on both the main and backup storage. The `backup` plugin will take more space on the backup than on the main storage.

It has the disadvantage that it will only work with the SQLITE3 backend and is not supported by the PostgreSQL backend, and is unlikely to be supported on any future database backends.

You can only specify _one_ replica.

It is recommended that you use a network-mounted filesystem for the backup destination: for example, a NAS you can access remotely. Note you need to mount the network filesystem using NFS version 4.

At the minimum, set the backup to a different storage device.

This is no better than just using RAID-1 (and the RAID-1 will probably be faster) but this is easier to set up: just plug in a commodity USB flash disk (with metal casing, since a lot of writes are done and you need to dissipate the heat quickly) and use it as the backup location, without repartitioning your OS disk, for example.

> 📘
>
> Do note that files are not stored encrypted, so you should really not do this with rented space ("cloud storage").

To recover, simply get **all** the backup database files.

Note that SQLITE3 will sometimes create a `-journal` or `-wal` file, which is necessary to ensure correct recovery of the backup; you need to copy those too, with corresponding renames if you use a different filename for the backup database, e.g. if you named the backup `backup.sqlite3` and when you recover you find `backup.sqlite3` and `backup.sqlite3-journal` files, you rename `backup.sqlite3` to `lightningd.sqlite3` and `backup.sqlite3-journal` to `lightningd.sqlite3-journal`.

Note that the `-journal` or `-wal` file may or may not exist, but if they _do_, you _must_ recover them as well (there can be an `-shm` file as well in WAL mode, but it is unnecessary; it is only used by SQLITE3 as a hack for portable shared memory, and contains no useful data; SQLITE3 will ignore its contents always).

It is recommended that you use **the same filename** for both main and backup databases (just on different directories), and put the backup in its own directory, so that you can just recover all the files in that directory without worrying about missing any needed files or correctly renaming.
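A sketch of that recovery flow, using `mktemp` directories as stand-ins for the real backup mount and the new node's lightning-dir:

```shell
# Simulate a replica directory that used the same filename as the main db.
BACKUP_DIR=$(mktemp -d)    # stands in for your real backup mount
LIGHTNINGDIR=$(mktemp -d)  # stands in for the new node's lightning-dir
touch "$BACKUP_DIR/lightningd.sqlite3" "$BACKUP_DIR/lightningd.sqlite3-journal"

# Recover by taking *all* the files: the db plus any -journal/-wal companions.
cp "$BACKUP_DIR"/lightningd.sqlite3* "$LIGHTNINGDIR"/
ls "$LIGHTNINGDIR"
```

Because backup and main share a filename, the glob picks up the companion files with no renaming needed.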
If your backup destination is a network-mounted filesystem that is in a remote location, then even loss of all hardware in one location will allow you to still recover your Lightning funds.

However, if instead you are just replicating the database on another storage device in a single location, you remain vulnerable to disasters like fire or computer confiscation.

## `backup` Plugin And Remote NFS Mount

> 📘 Who should do this:
>
> Casual users.

You can find the full source for the `backup` plugin here:

https://github.com/lightningd/plugins/tree/master/backup

The `backup` plugin requires Python 3.
- Figure out where you will put the backup files.
  - Ideally you have an NFS or other network-based mount on your system, into which you will put the backup.
- Stop your Lightning node.
- `/path/to/backup-cli init --lightning-dir ${LIGHTNINGDIR} file:///path/to/nfs/mount/file.bkp`.
  This creates an initial copy of the database at the NFS mount.
- Add these settings to your `lightningd` configuration:
  - `important-plugin=/path/to/backup.py`
- Restart your Lightning node.
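As a transcript, the steps above look roughly like this; all paths and the NFS mount point are placeholders, and the `lightning-cli stop` call assumes you run the node directly rather than under a service manager:

```shell
$ lightning-cli stop
$ /path/to/backup-cli init --lightning-dir ~/.lightning/bitcoin file:///path/to/nfs/mount/file.bkp
$ echo "important-plugin=/path/to/backup.py" >> ~/.lightning/config
$ lightningd
```

Using `important-plugin` (rather than `plugin`) makes `lightningd` refuse to run if the backup plugin fails, which is what you want for a backup.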
It is recommended that you use a network-mounted filesystem for the backup destination.

For example, if you have a NAS you can access remotely.

> 📘
>
> Do note that files are not stored encrypted, so you should really not do this with rented space ("cloud storage").

Alternately, you _could_ put it in another storage device (e.g. USB flash disk) in the same physical location.

To recover:

- Re-download the `backup` plugin and install Python 3 and the requirements of `backup`.
- `/path/to/backup-cli restore file:///path/to/nfs/mount ${LIGHTNINGDIR}`
If your backup destination is a network-mounted filesystem that is in a remote location, then even loss of all hardware in one location will allow you to still recover your Lightning funds.

However, if instead you are just replicating the database on another storage device in a single location, you remain vulnerable to disasters like fire or computer confiscation.

## Filesystem Redundancy

> 📘 Who should do this:
>
> Filesystem nerds, data hoarders, home labs, enterprise users.

You can set up a RAID-1 with multiple storage devices, and point the `$LIGHTNINGDIR` to the RAID-1 setup. That way, failure of one storage device will still let you recover funds.

You can use a hardware RAID-1 setup, or just buy multiple commodity storage media you can add to your machine and use a software RAID, such as (not an exhaustive list!):
- `mdadm` to create a virtual volume which is the RAID combination of multiple physical media.
- BTRFS RAID-1 or RAID-10, a filesystem built into Linux.
- ZFS RAID-Z, a filesystem that cannot be legally distributed with the Linux kernel, but can be distributed in a BSD system, and can be installed on Linux with some extra effort, see [ZFSonLinux](https://zfsonlinux.org).

RAID-1 (whether by hardware, or software) like the above protects against failure of a single storage device, but does not protect you in case of certain disasters, such as fire or computer confiscation.

You can "just" use a pair of high-quality metal-casing USB flash devices (you need metal-casing since the devices will have a lot of small writes, which will cause a lot of heating, which needs to dissipate very fast, otherwise the flash device firmware will internally disconnect the flash device from your computer, reducing your reliability) in RAID-1, if you have enough USB ports.
### Example: BTRFS on Linux

On a Linux system, one of the simpler things you can do would be to use BTRFS RAID-1 setup between a partition on your primary storage and a USB flash disk.

The below "should" work, but assumes you are comfortable with low-level Linux administration.

If you are on a system that would make you cry if you break it, you **MUST** stop your Lightning node and back up all files before doing the below.

- Install `btrfs-progs` or `btrfs-tools` or equivalent.
- You may need to add `-f` if the USB flash disk is already formatted.
- Create a mountpoint for the `btrfs` filesystem.
- Create a `/etc/fstab` entry.
  - Use the `UUID` option instead of `/dev/sdXX` since the exact device letter can change across boots.
  - You can get the UUID by `lsblk -o NAME,UUID`. Specifying _either_ of the devices is sufficient.
  - Add `autodefrag` option, which tends to work better with SQLITE3 databases.
  - e.g. `UUID=${UUID} ${BTRFSMOUNTPOINT} btrfs defaults,autodefrag 0 0`
- `ln -s ${BTRFSMOUNTPOINT}/lightningdirname ${LIGHTNINGDIR}`.
- Make sure the `$LIGHTNINGDIR` has the same structure as what you originally had.
- Add `crontab` entries for `root` that perform regular `btrfs` maintenance tasks.
  - `0 0 * * * /usr/bin/btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 ${BTRFSMOUNTPOINT}`
    This prevents BTRFS from running out of blocks even if it has unused space _within_ blocks, and is run at midnight everyday. You may need to change the path to the `btrfs` binary.
  - `0 0 * * 0 /usr/bin/btrfs scrub start -B -c 2 -n 4 ${BTRFSMOUNTPOINT}`
    This detects bit rot (i.e. bad sectors) and auto-heals the filesystem, and is run on Sundays at midnight.
- Restart your Lightning node.
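Condensed, the storage-side steps above look roughly like this; device names and the mountpoint are placeholders, and you should triple-check the device names, since `mkfs.btrfs` destroys whatever is on them:

```shell
$ sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdXX /dev/sdYY   # wipes both devices!
$ sudo mkdir -p /mnt/lightning
$ lsblk -o NAME,UUID                                      # note the UUID of either device
$ echo 'UUID=<uuid-from-lsblk> /mnt/lightning btrfs defaults,autodefrag 0 0' \
    | sudo tee -a /etc/fstab
$ sudo mount /mnt/lightning
```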
If one or the other device fails completely, shut down your computer, boot on a recovery disk, and:

- Do **not** write to the degraded `btrfs` mount!
- Start up a `lightningd` using the `hsm_secret` and `lightningd.sqlite3` and close all channels and move all funds to onchain cold storage you control, then set up a new Lightning node.
If the device that fails is the USB flash disk, you can replace it using BTRFS commands.

You should probably stop your Lightning node while doing this.

- `btrfs replace start /dev/sdOLD /dev/sdNEW ${BTRFSMOUNTPOINT}`.
- If `/dev/sdOLD` no longer even exists because the device is really really broken, use `btrfs filesystem show` to see the number after `devid` of the broken device, and use that number instead of `/dev/sdOLD`.
- Monitor status with `btrfs replace status ${BTRFSMOUNTPOINT}`.

More sophisticated setups with more than two devices are possible. Take note that "RAID 1" in `btrfs` means "data is copied on up to two devices", meaning only up to one device can fail.

You may be interested in `raid1c3` and `raid1c4` modes if you have three or four storage devices. BTRFS would probably work better if you were purchasing an entire set of new storage devices to set up a new node.
## PostgreSQL Cluster

> 📘 Who should do this:
>
> Enterprise users, whales.

`lightningd` may also be compiled with PostgreSQL support.

PostgreSQL is generally faster than SQLITE3, and also supports running a PostgreSQL cluster to be used by `lightningd`, with automatic replication and failover in case an entire node of the PostgreSQL cluster fails.

Setting this up, however, is more involved.

By default, `lightningd` compiles with PostgreSQL support **only** if it finds `libpq` installed when you `./configure`. To enable it, you have to install a developer version of `libpq`. On most Debian-derived systems that would be `libpq-dev`. To verify you have it properly installed on your system, check if the following command gives you a path:

```shell
pg_config --includedir
```

Versioning may also matter to you.

For example, Debian Stable ("buster") as of late 2020 provides PostgreSQL 11.9 for the `libpq-dev` package, but Ubuntu LTS ("focal") of 2020 provides PostgreSQL 12.5.

Debian Testing ("bullseye") uses PostgreSQL 13.0 as of this writing. PostgreSQL 12 had a non-trivial change in the way the restore operation is done for replication.

You should use the same PostgreSQL version of `libpq-dev` as what you run on your cluster, which probably means running the same distribution on your cluster.
You then have to compile `lightningd` with PostgreSQL support.

If you were not using PostgreSQL before but have compiled and used `lightningd` on your system, the resulting `lightningd` will still continue supporting and using your current SQLITE3 database; it just gains the option to use a PostgreSQL database as well.

If you just want to use PostgreSQL without using a cluster (for example, as an initial test without risking any significant funds), then after setting up a PostgreSQL database, you just need to add `--wallet=postgres://${USER}:${PASSWORD}@${HOST}:${PORT}/${DB}` to your `lightningd` config or invocation.
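A minimal single-node sketch of that, assuming a local PostgreSQL server; the role and database names (`cln`, `clndb`) are examples, not a CLN convention:

```shell
$ sudo -u postgres createuser --pwprompt cln   # prompts for the role's password
$ sudo -u postgres createdb -O cln clndb
$ lightningd --wallet=postgres://cln:${PASSWORD}@127.0.0.1:5432/clndb
```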
To set up a cluster for a brand new node, follow this (external) [guide by @gabridome](https://github.com/gabridome/docs/blob/master/c-lightning_with_postgresql_reliability.md).

The above guide assumes you are setting up a new node from scratch. It is also specific to PostgreSQL 12, and setting up for other versions **will** have differences; read the PostgreSQL manuals linked above.

> 🚧
>
> If you want to continue a node that started using an SQLITE3 database, note that we do not support this. You should set up a new PostgreSQL node, move funds from the SQLITE3 node to the PostgreSQL node, then shut down the SQLITE3 node permanently.

There are also more ways to set up PostgreSQL replication.

In general, you should use [synchronous replication](https://www.postgresql.org/docs/13/warm-standby.html#SYNCHRONOUS-REPLICATION), since `lightningd` assumes that once a transaction is committed, it is saved in all permanent storage. The latency synchronous replication imposes can make remote replicas difficult to run, however.
## SQLite Litestream Replication

> 🚧
>
> Previous versions of this document recommended this technique, but we no longer do so.
> According to [issue 4857](https://github.com/ElementsProject/lightning/issues/4857), even with a 60-second timeout that we added in 0.10.2, this leads to constant crashing of `lightningd` in some situations. This section will be removed completely six months after 0.10.3. Consider using `--wallet=sqlite3://${main}:${backup}` above instead.

One of the simpler things on any system is to use Litestream to replicate the SQLite database. It continuously streams SQLite changes to file or external storage (the cloud storage option should not be used).

Backups/replication should not be on the same disk as the original SQLite DB.
You need to enable WAL mode on your database.

To do so, first stop `lightningd`, then:

```shell
$ sqlite3 lightningd.sqlite3
sqlite3> PRAGMA journal_mode = WAL;
sqlite3> .quit
```

Then just restart `lightningd`.
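You can verify the mode took effect before restarting; the stock `sqlite3` CLI prints the current mode, which should now be `wal`:

```shell
$ sqlite3 lightningd.sqlite3 "PRAGMA journal_mode;"
wal
```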
`/etc/litestream.yml`:

```yaml
dbs:
  - path: /home/bitcoin/.lightning/bitcoin/lightningd.sqlite3
    replicas:
      - path: /media/storage/lightning_backup
```

and start the service using systemctl:

```shell
$ sudo systemctl start litestream
```

Restore:

```shell
$ litestream restore -o /home/bitcoin/restore_lightningd.sqlite3 file:///media/storage/lightning_backup
```

Because Litestream only copies small changes and not the entire database (holding a read lock on the file while doing so), the 60-second timeout on locking should not be reached unless something has made your backup medium very very slow.
Litestream has its own timer, so there is a tiny (but non-negligible) probability that `lightningd` updates the database, then irrevocably commits to the update by sending revocation keys to the counterparty, and _then_ your main storage media crashes before Litestream can replicate the update.

Treat this as a superior version of the "Database File Backups" section below and prefer recovering via other backup methods first.

## Database File Backups

> 📘 Who should do this:
>
> Those who already have at least one of the other backup methods, those who are #reckless.

This is the least desirable backup strategy, as it _can_ lead to loss of all in-channel funds if you use it.

However, having _no_ backup strategy at all _will_ lead to loss of all in-channel funds, so this is still better than nothing.

This backup method is undesirable, since it cannot recover the following channels:

Even if you have one of the better options above, you might still want to do this.
Again, this strategy can lead to only **_partial_** recovery of funds, or even to complete failure to recover, so use the other methods first to recover!

### Offline Backup

While `lightningd` is not running, just copy the `lightningd.sqlite3` file in the `$LIGHTNINGDIR` on backup media somewhere.

To recover, just copy the backed up `lightningd.sqlite3` into your new `$LIGHTNINGDIR`.

You can also use any automated backup system as long as it includes the `lightningd.sqlite3` file (and optionally `hsm_secret`, but note that as a secret key, thieves getting a copy of your backups may allow them to steal your funds, even in-channel funds) and as long as it copies the file while `lightningd` is not running.
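A sketch of one offline backup cycle; the `systemctl` unit name and the backup path are assumptions about your particular setup:

```shell
$ sudo systemctl stop lightningd          # then wait for the process to fully exit
$ cp ~/.lightning/bitcoin/lightningd.sqlite3 /media/backup/lightningd.sqlite3
$ echo 'PRAGMA integrity_check;' | sqlite3 /media/backup/lightningd.sqlite3
ok
$ sudo systemctl start lightningd
```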
### Backing Up While `lightningd` Is Running

Since `sqlite3` will be writing to the file while `lightningd` is running, `cp`ing the `lightningd.sqlite3` file while `lightningd` is running may result in the file not being copied properly if `sqlite3` happens to be committing database transactions at that time, potentially leading to a corrupted backup file that cannot be recovered from.

You have to stop `lightningd` before copying the database to backup in order to ensure that backup files are not corrupted, and in particular, wait for the `lightningd` process to exit.

Obviously, this is disruptive to node operations, so you might prefer to just perform the `cp` even if the backup potentially is corrupted. As long as you maintain multiple backups sampled at different times, this may be more acceptable than stopping and restarting `lightningd`; the corruption only exists in the backup, not in the original file.

If the filesystem or volume manager containing `$LIGHTNINGDIR` has a snapshot facility, you can take a snapshot of the filesystem, then mount the snapshot, copy `lightningd.sqlite3`, unmount the snapshot, and then delete the snapshot.

Similarly, if the filesystem supports a "reflink" feature, such as `cp -c` on an APFS on MacOS, or `cp --reflink=always` on an XFS or BTRFS on Linux, you can also use that, then copy the reflinked copy to a different storage medium; this is equivalent to a snapshot of a single file.
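On a reflink-capable filesystem (XFS or BTRFS shown here; substitute `cp -c` on APFS), that snapshot-then-copy dance looks like:

```shell
$ cp --reflink=always lightningd.sqlite3 snapshot.sqlite3   # near-instant snapshot copy
$ cp snapshot.sqlite3 /media/backup/lightningd.sqlite3      # slow copy off the snapshot
$ rm snapshot.sqlite3
```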
This _reduces_ but does not _eliminate_ this race condition, so you should still maintain multiple backups.

You can additionally perform a check of the backup by this command:

```shell
echo 'PRAGMA integrity_check;' | sqlite3 ${BACKUPFILE}
```

This will result in the string `ok` being printed if the backup is **likely** not corrupted.

If the result is anything other than `ok`, the backup is definitely corrupted and you should make another copy.
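Putting the copy and the check together, a sketch that keeps multiple timestamped copies (the backup path is a placeholder):

```shell
$ STAMP=$(date +%Y%m%d-%H%M%S)
$ cp lightningd.sqlite3 "/media/backup/lightningd.$STAMP.sqlite3"
$ echo 'PRAGMA integrity_check;' | sqlite3 "/media/backup/lightningd.$STAMP.sqlite3"
ok
```

Keeping several timestamped copies means a single corrupted sample does not cost you your only backup.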
In order to make a proper uncorrupted backup of the SQLITE3 file while `lightningd` is running, we would need to have `lightningd` perform the backup itself, which, as of the version at the time of this writing, is not yet implemented.
#### Regtest (local, fast-start) option

If you want to experiment with `lightningd`, there's a script to set up a `bitcoind` regtest test network of two local lightning nodes, which provides a convenient `start_ln` helper. See the notes at the top of the `startup_regtest.sh` file for details on how to use it.

```bash
. contrib/startup_regtest.sh
```

#### Mainnet Option

To test with real bitcoin, you will need to have a local `bitcoind` node running:

```bash
bitcoind -daemon
```
Wait until `bitcoind` has synchronized with the network.

Make sure that you do not have `walletbroadcast=0` in your `~/.bitcoin/bitcoin.conf`, or you may run into trouble.

Notice that running `lightningd` against a pruned node may cause some issues if not managed carefully, see [pruning](doc:bitcoin-core#using-a-pruned-bitcoin-core-node) for more information.

You can start `lightningd` with the following command:

```bash
lightningd --network=bitcoin --log-level=debug
```
Once you've started for the first time, there's a script called `contrib/bootstrap-node.sh` which will connect you to other nodes on the lightning network.

There are also numerous plugins available for Core Lightning which add capabilities: see the [Plugins](doc:plugins) guide, and check out the plugin collection at: https://github.com/lightningd/plugins.

For a less reckless experience, you can encrypt the HD wallet seed: see [HD wallet encryption](doc:backup-and-recovery#hsm-secret-backup).
1. addition of FD passing semantics to allow establishing a new connection between daemons (communication uses [socketpair](https://man7.org/linux/man-pages/man2/socketpair.2.html), so no `connect`)
2. change the message length prefix from `u16` to `u32`, allowing for messages larger than 65 KB. The CSV files are with the respective sub-daemon and also use [generate-wire.py](https://github.com/ElementsProject/lightning/blob/master/tools/generate-wire.py) to generate encoding, decoding and printing functions

- We describe the JSON-RPC using [JSON Schema](https://json-schema.org/) in the [`doc/schemas`](https://github.com/ElementsProject/lightning/tree/master/doc/schemas) directory. Each method has a `lightning-*.json` for request and response. During tests the `pytest` target will verify responses, however the JSON-RPC methods are _not_ generated (yet?). We do generate various client stubs for languages, using the `msggen` tool. More on the generated stubs and utilities below.
|
||||
## Man pages
|
||||
|
||||
The manpages are generated from the JSON schemas using the `fromschema` tool.

`msggen` is used to generate JSON-RPC client stubs, and converters between in-memory formats and the JSON format. In addition, by chaining some of these we can expose a [grpc](https://grpc.io/) interface that matches the JSON-RPC interface. This conversion chain is implemented in the [grpc-plugin](https://github.com/ElementsProject/lightning/tree/master/plugins/grpc-plugin).

![](https://files.readme.io/8777cc4-image.png)

*Artifacts generated from the JSON Schemas using `msggen`*

### `cln-rpc`

The `cln-grpc` crate is mostly used to provide the primitives to build the `grpc-plugin`.

- Next it generates the `convert.rs` file which is used to convert the structs for in-memory representation from `cln-rpc` into the corresponding protobuf structs.
- Finally `msggen` generates the `server.rs` file which can be bound to a grpc endpoint listening for incoming grpc requests, and it will convert the request and forward it to the JSON-RPC. Upon receiving the response it gets converted back into a grpc response and sent back.

![](https://files.readme.io/8777cc4-image.png)

To read the code, you should start from `lightningd.c`.

Here's a list of parts, with notes:

- ccan - useful routines from http://ccodearchive.net
  - Use `make update-ccan` to update it.
  - Use `make update-ccan CCAN_NEW="mod1 mod2..."` to add modules
  - Do not edit this! If you want a wrapper, add one to `common/utils.h`.
- bitcoin/ - bitcoin script, signature and transaction routines.
  - Not a complete set, but enough for our purposes.

## Database

Core Lightning state is persisted in `lightning-dir`. It is a sqlite database stored in the `lightningd.sqlite3` file, typically under `~/.lightning/<network>/`.

You can run queries against this file like so:

```shell
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3 \
  "SELECT HEX(prev_out_tx), prev_out_index, status FROM outputs"
```

Or you can launch into the sqlite3 repl and check things out from there:

```shell
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
SQLite version 3.21.0 2017-10-24 18:55:49
sqlite> .schema outputs
...
```

Some data is stored as raw bytes, use `HEX(column)` to pretty print these.

Make sure that clightning is not running when you query the database, as some queries may lock the database and cause crashes.

#### Common variables

Table `vars` contains global variables used by the lightning node.

```shell
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
SQLite version 3.21.0 2017-10-24 18:55:49
...
bip32_max_index|4
...
```

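The same lookup works from Python's standard `sqlite3` module. A minimal sketch, assuming the `vars` columns are named `name` and `val`, and opening read-only so we don't take a write lock on a running node's database:

```python
import sqlite3

def read_vars(db_path):
    """Return the `vars` table as a dict (column names name/val assumed).

    Opens the database read-only to avoid locking a running node's DB.
    """
    db = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return dict(db.execute("SELECT name, val FROM vars"))
    finally:
        db.close()
```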
Variables:

- `next_pay_index` next resolved invoice counter that will get assigned.
- `bip32_max_index` last wallet derivation counter.

Note: Each time `newaddr` command is called, `bip32_max_index` counter is increased to the last derivation index. Each address generated after `bip32_max_index` is not included as lightning funds.

# gossip_store: Direct Access To Lightning Gossip

The `lightning_gossipd` daemon stores the gossip messages, along with some internal data, in a file called the "gossip_store". Various plugins and daemons access this (in a read-only manner), and the format is documented here.

## The File Header

```
u8 version;
```

The gossmap header consists of one byte. The top 3 bits are the major version: if these are not all zero, you need to re-read this (updated) document to see what changed. The lower 5 bits are the minor version, which won't worry you: currently they will be 11.

After the file header comes a number of records.

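As a sketch, the header check above can be expressed like this (a hypothetical helper, not part of CLN):

```python
import struct

def read_gossip_store_version(path):
    """Return (major, minor) from the one-byte gossip_store header.

    Layout per the docs above: top 3 bits major version, low 5 bits minor.
    """
    with open(path, "rb") as f:
        (version,) = struct.unpack("B", f.read(1))
    major, minor = version >> 5, version & 0x1F
    if major != 0:
        raise ValueError(f"unknown gossip_store major version {major}")
    return major, minor
```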
## The Record Header

```
be16 flags;
be16 len;
be32 crc;
be32 timestamp;
```

Each record consists of a header and a message. The header is big-endian, containing flags, the length (of the following body), the crc32c (of the following message, starting with the timestamp field in the header) and a timestamp extracted from certain messages (zero where not relevant, but ignore it in those cases).

The flags currently defined are:

```
#define DELETED 0x8000
#define PUSH 0x4000
#define DYING 0x0800
```

Deleted fields should be ignored: on restart, they will be removed as the gossip_store is rewritten.

The push flag indicates gossip which is generated locally: this is important for gossip timestamp filtering, where peers request gossip and we always send our own gossip messages even if the timestamp wasn't within their request.

Other flags should be ignored.

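Putting the record layout and flags together, a minimal reader sketch (a hypothetical helper, not part of CLN; CRC verification omitted):

```python
import struct

DELETED, PUSH, DYING = 0x8000, 0x4000, 0x0800

def read_records(f):
    """Yield (flags, timestamp, message) records from an open gossip_store.

    The file handle should already be positioned past the 1-byte file
    header. Deleted records are skipped, and a partially-written trailing
    record ends iteration.
    """
    while True:
        header = f.read(12)
        if len(header) < 12:
            return                      # clean end of file
        flags, length, crc, timestamp = struct.unpack(">HHII", header)
        msg = f.read(length)
        if len(msg) < length:
            return                      # partially-written record: ignore it
        if flags & DELETED:
            continue
        yield flags, timestamp, msg
```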
## The Message

Each message consists of a 16-bit big-endian "type" field (for efficiency, an implementation may read this along with the header), and optional data. Some messages are defined by the BOLT 7 gossip protocol, others are for internal use. Unknown message types should be skipped over.

### BOLT 7 Messages

These are the messages which gossipd has validated, and ensured are in order.

- `channel_announcement` (256): a complete, validated channel announcement. This will always come before any `channel_update` which refers to it, or `node_announcement` which refers to a node.
- `channel_update` (258): a complete, validated channel update. Note that you can see multiple of these (old ones will be deleted as they are replaced though).
- `node_announcement` (257): a complete, validated node announcement. Note that you can also see multiple of these (old ones will be deleted as they are replaced).

### Internal Gossip Daemon Messages

- `gossip_store_delete_chan` (4103)
  - `scid`: u64

This is added when a channel is deleted. You won't often see this if you're reading the file once (as the channel record header will have been marked `deleted` first), but useful if you are polling the file for updates.

- `gossip_store_ended` (4105)
  - `equivalent_offset`: u64

This is only ever added as the final entry in the gossip_store. It means the file has been deleted (usually because lightningd has been restarted), and you should re-open it. As an optimization, the `equivalent_offset` in the new file reflects the point at which the new gossip_store is equivalent to this one (with deleted records removed). However, if lightningd has been restarted multiple times it is possible that this offset is not valid, so it's really only useful if you're actively monitoring the file.

- `gossip_store_chan_dying` (4106)
  - `scid`: u64
  - `blockheight`: u32

This is placed in the gossip_store file when a funding transaction is spent. `blockheight` is set to 12 blocks beyond the block containing the spend: at this point, gossipd will delete the channel.

## Using the Gossip Store File

- Always check the major version number! We will increment it if the format changes in a way that breaks readers.
- Ignore unknown flags in the header.
- Ignore message types you don't know.
- You don't need to check the messages, as they have been validated.
- It is possible to see a partially-written record at the end. Ignore it.

If you are keeping the file open to watch for changes:

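A sketch of that polling advice (a hypothetical helper, not part of CLN): read only complete records, rewind past a partially-written trailing record so it can be retried on the next poll, and signal when a `gossip_store_ended` message means the file should be re-opened.

```python
import struct

GOSSIP_STORE_ENDED = 4105

def poll_records(f):
    """Read whatever complete records are currently available.

    The handle should already be past the 1-byte file header. Returns
    (records, ended): `ended` is True when a gossip_store_ended message
    was seen, meaning the caller should re-open the file.
    """
    records, ended = [], False
    while True:
        start = f.tell()
        hdr = f.read(12)
        if len(hdr) < 12:
            f.seek(start)
            break
        flags, length, crc, ts = struct.unpack(">HHII", hdr)
        msg = f.read(length)
        if len(msg) < length:
            f.seek(start)               # partial record: leave for next poll
            break
        if len(msg) >= 2 and struct.unpack(">H", msg[:2])[0] == GOSSIP_STORE_ENDED:
            ended = True
            break
        records.append((flags, msg))
    return records, ended
```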
## Build and Development

Install the following dependencies for best results:

```shell
sudo apt update
sudo apt install jq valgrind cppcheck shellcheck libsecp256k1-dev libpq-dev
```

Re-run `configure` and build using `make`:

```shell
./configure
make -j$(nproc)
```

## Debugging

There are various development options enabled by running with `--developer`. You can log console messages with log_info() in lightningd and status_debug() in other subdaemons.

You can debug crashing subdaemons with the argument `--dev-debugger=channeld`, where `channeld` is the subdaemon name. It will run `gnome-terminal` by default with a gdb attached to the subdaemon when it starts. You can change the terminal used by setting the `DEBUG_TERM` environment variable, such as `DEBUG_TERM="xterm -e"` or `DEBUG_TERM="konsole -e"`.

It will also print out (to stderr) the gdb command for manual connection. The subdaemon will be stopped (it sends itself a `SIGSTOP`); you'll need to `continue` in gdb.

```shell
./configure
make -j$(nproc)
```

## Building Python Packages

Core Lightning includes several Python packages in the workspace that can be built individually or all at once:

```shell
# Build all Python packages
make pyln-build-all
...
make pyln-build-wss-proxy
```

You can also build packages directly with uv:

```shell
uv build contrib/pyln-client/
uv build contrib/pyln-proto/
```

An updated version of the NCC source code analysis tool is available at https://github.com/bitonic-cjp/ncc

It can be used to analyze the lightningd source code by running `make clean && make ncc`. The output (which is built in parallel with the binaries) is stored in .nccout files. You can browse it, for instance, with a command like `nccnav lightningd/lightningd.nccout`.

Code coverage can be measured using Clang's source-based instrumentation.

First, build with the instrumentation enabled:

```shell
make clean
./configure --enable-coverage CC=clang
make -j$(nproc)
```

Then run the test for which you want to measure coverage. By default, the raw coverage profile will be written to `./default.profraw`. You can change the output file by setting `LLVM_PROFILE_FILE`:

```shell
LLVM_PROFILE_FILE="full_channel.profraw" ./channeld/test/run-full_channel
```

Finally, generate an HTML report from the profile. We have a script to make this easier:

```shell
./contrib/clang-coverage-report.sh channeld/test/run-full_channel \
    full_channel.profraw full_channel.html
```

For more advanced report generation options, see the Clang coverage documentation.

There are a few subtleties you should be aware of as you modify deeper parts of the code:

- `ccan/structeq`'s STRUCTEQ_DEF will define safe comparison function `foo_eq()` for struct `foo`, failing the build if the structure has implied padding.
- `command_success`, `command_fail`, and `command_fail_detailed` will free the `cmd` you pass in. This also means that if you `tal`-allocated anything from the `cmd`, they will also get freed at those points and will no longer be accessible afterwards.
- When making a structure part of a list, you will instance a `struct list_node`. This has to be the _first_ field of the structure, or else `dev-memleak` command will think your structure has leaked.

## Protocol Modifications

The source tree contains CSV files extracted from the v1.0 BOLT specifications (wire/extracted_peer_wire_csv and wire/extracted_onion_wire_csv). You can regenerate these by first deleting the local copy (if any) at directory .tmp.bolts, setting `BOLTDIR` and `BOLTVERSION` appropriately, and finally running `make extract-bolt-csv`. By default the bolts will be retrieved from the directory `../bolts` and a recent git version.

e.g., `make extract-bolt-csv BOLTDIR=../bolts BOLTVERSION=ee76043271f79f45b3392e629fd35e47f1268dc8`

## Pushing Up Changes to PR Branches

If you want to pull down and run changes to a PR branch, you can use the convenient pr/pr-number branch tags to do this. First you'll need to make sure you have the following in your `.git/config`.

```
[remote "origin"]
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```

Once that's added, run `git fetch` and then you should be able to check out PRs by their number.

```shell
git checkout pr/pr-number
```

If you make changes, here's how to push them back to the original PR originator's branch. NOTE: This assumes they have turned on "allow maintainers to push changes".

First you'll want to make sure that their remote is added to your local git. You can do this with `remote -v` which lists all current remotes.

```shell
git remote -v
```

If it's not there, you can add it with

```shell
git remote add <name> <repo_url>
```

For example, here's how you'd add `niftynei`'s git lightning clone.

```shell
git remote add niftynei git@github.com:niftynei/lightning.git
```

To push changes to the remote, from a `pr/pr-number` branch, you'll need to know the name of the branch on their repo that made the PR originally. You can find this on the PR on github.

You'll also need to make sure you've got a ref to that branch from their repo; you can do this by fetching the latest branches for them with the following command.

```shell
git fetch niftynei
```

You may need to fetch their latest set of commits before pushing yours, you can do this with

```shell
git pull -r niftynei <pr-branch/name>
```

Finally, you're good to go in terms of pushing up the latest commits that you've made (or changed) on their branch.

```shell
git push <name> HEAD:<pr-branch/name>
```

For example, here's how you'd push changes to a branch named "nifty/add-remote-to-readme".

```shell
git push niftynei HEAD:nifty/add-remote-to-readme
```

If that fails, go check with the PR submitter that they have the ability to push changes to their PR turned on. Also make sure you're on the right branch before you push!

Here's a checklist for the release process.

2. Look through outstanding issues, to identify any problems that might be necessary to fixup before the release. Good candidates are reports of the project not building on different architectures or crashes.
3. Identify a good lead for each outstanding issue, and ask them about a fix timeline.
4. Create a milestone for the _next_ release on Github, and go through open issues and PRs and mark accordingly.
5. Ask (via email) the most significant contributor who has not already named a release to name the release (use `devtools/credit --verbose v<PREVIOUS-VERSION>` to find this contributor). CC previous namers and team.

## Preparing for -rc1

1. Check that `CHANGELOG.md` is well formatted, ordered in areas, covers all significant changes, and sub-ordered approximately by user impact & coolness.
2. Use `devtools/changelog.py` to collect the changelog entries from pull request commit messages and merge them into the manually maintained `CHANGELOG.md`. This does API queries to GitHub, which are severely ratelimited unless you use an API token: set the `GH_TOKEN` environment variable to a Personal Access Token from https://github.com/settings/tokens
3. Create a new CHANGELOG.md heading to `v<VERSION>rc1`, and create a link at the bottom. Note that you should exactly copy the date and name format from a previous release, as the `build-release.sh` script relies on this.
4. Update the package versions: `uv run make update-versions NEW_VERSION=v<VERSION>rc1`
5. Create a PR with the above.

- Build reproducible Ubuntu-v20.04, Ubuntu-v22.04 and Ubuntu-v24.04 images. Follow [link](https://docs.corelightning.org/docs/repro#building-using-the-builder-image) for manually Building Ubuntu Images.
- Build Docker images for amd64 and arm64v8. Follow [link](https://docs.corelightning.org/docs/docker-images) for more details on Docker publishing.
- Create and sign checksums. Follow [link](https://docs.corelightning.org/docs/repro#co-signing-the-release-manifest) for manually signing the release.

8. If you used `--sudo`, the tarballs may be owned by root, so revert ownership if necessary: `sudo chown ${USER}:${USER} *${VERSION}*`
9. Verify the checksums match the pre-release `SHA256SUMS-v<VERSION>`, then append your signatures to the official signature `SHA256SUMS-v<VERSION>.asc` file to confirm the build's integrity.
10. Send `SHA256SUMS-v<VERSION>` & `SHA256SUMS-v<VERSION>.asc` files to the rest of the team to check and sign the release.
11. Team members can verify the release with the help of `build-release.sh`:

- ... repeat for each pyln package with the appropriate token.

14. Publish multi-arch Docker images (`elementsproject/lightningd:v${VERSION}` and `elementsproject/lightningd:latest`) to Docker Hub either using the GitHub action `Build and push multi-platform docker images` or by running the `tools/build-release.sh docker` script. Prior to building docker images by `tools/build-release.sh` script, ensure that `multiarch/qemu-user-static` setup is working on your system as described [here](https://docs.corelightning.org/docs/docker-images#setting-up-multiarchqemu-user-static).

## Performing the Release

1. Edit the GitHub draft and include the `SHA256SUMS-v<VERSION>.asc` file.

```
commando+<protocol>://<cln-host>:<ws-port>?pubkey=<pubkey>&rune=<rune>&invoiceRune=<invoiceRune>&certs=<combined-base64-encoded-clientkey-clientcert-cacert>
```

#### Parameters:

- protocol: ws or wss (WebSocket or secure WebSocket)
- cln-host: Hostname or IP address of the CLN node
- ws-port: WebSocket port
- certs: A Base64-encoded sequence created by concatenating the client key, client certificate, and CA certificate, in that order.

#### Example:

```
commando+wss://cln.local:5001?pubkey=023456789abcdef&rune=8hJ6ZKFvRune&invoiceRune=5kJ3ZKFvInvRune&certs=LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t
```

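The parameters above can be pulled out of a connection string with standard URL parsing. A sketch (the function and the result's field names are illustrative, not a CLN API):

```python
from urllib.parse import urlparse, parse_qs

def parse_cln_connection(conn):
    """Split a commando/clnrest-style connection string into its parts."""
    scheme, rest = conn.split("://", 1)
    base, _, proto = scheme.partition("+")  # "commando+wss" -> ("commando", "wss")
    u = urlparse(f"//{rest}")               # parse host, port and query string
    params = {k: v[0] for k, v in parse_qs(u.query).items()}
    return {"interface": base, "protocol": proto or None,
            "host": u.hostname, "port": u.port, **params}
```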
```
clnrest+<protocol>://<rest-host>:<rest-port>?rune=<rune>&certs=<combined-base64-encoded-clientkey-clientcert-cacert>
```

#### Parameters:

- protocol: http or https
- rest-host: Hostname or IP address of the REST interface
- rest-port: REST API port (typically 3010)
- certs: A Base64-encoded sequence created by concatenating the client key, client certificate, and CA certificate, in that order.

#### Example:

```
clnrest+https://cln.local:3010?rune=8hJ6ZKFvRune&certs=LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t
```

## gRPC Connection

```
clngrpc://<grpc-host>:<grpc-port>?pubkey=<pubkey>&protoPath=<path-to-proto>&certs=<combined-base64-encoded-clientkey-clientcert-cacert>
```

#### Parameters:

- grpc-host: Hostname or IP address of the gRPC interface
- grpc-port: gRPC port (typically 9736)
- pubkey: Node's public key (hex encoded)
- certs: A Base64-encoded sequence created by concatenating the client key, client certificate, and CA certificate, in that order.

#### Example:

```
clngrpc://cln.grpc:9736?pubkey=023456789abcdef&protoPath=/path/to/cln.proto&certs=LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t
```

## Image of available API interfaces and transport protocols

![](https://files.readme.io/3eeb3ddc8687fa45432c215777e478c40998bf94c42aeb1591c8096aac102e40-CLN-App-Development.png)

*A visual chart of available API interfaces and transport protocols for interacting with a CLN node*

You can use `lightning-cli help` to print a table of RPC methods.

### Installation

`pyln-client` is available on `pip`:

```shell
pip install pyln-client
```

Alternatively you can also install the development version to get access to currently unreleased features by checking out the Core Lightning source code and installing into your python3 environment:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning/contrib/pyln-client
uv sync
```

This will add links to the library into your environment so changing the checked out source code will also result in the environment picking up these changes. Notice however that unreleased versions may change API without warning, so test thoroughly with the released version.

### Tutorials

Check out the following recipes to learn how to use pyln-client in your applications.

🦉 **[Write a program in Python to interact with lightningd](https://docs.corelightning.org/v1.0/recipes/write-a-program-in-python-to-interact-with-lightningd)**

🦉 **[Write a hello-world plugin in Python](https://docs.corelightning.org/v1.0/recipes/write-a-hello-world-plugin-in-python)**

## Using Rust

### Installation

Run the following Cargo command in your project directory:

```shell
cargo add cln-rpc
```

Or add the following line to your Cargo.toml:

```toml
cln-rpc = "0.1.2"
```

Documentation for the `cln-rpc` crate is available at https://docs.rs/cln-rpc/.

privacy:
  view: public
---

# WSS-Proxy

The WSS Proxy plugin is a Rust-based proxy server that facilitates encrypted communication between clients and a WebSocket server. It acts as an intermediary, forwarding RPC JSON commands from the client to the WebSocket server. Once the WebSocket server processes these commands and generates a response, the proxy relays that response back to the client, creating a seamless interaction bridge between client and server.

## Installation

The plugin is built into Core Lightning.

## Configuration

> 🚧
If `wss-bind-addr` is not specified, the plugin will disable itself.

- `--wss-bind-addr`: WSS proxy addresses to connect with the WS server. This option can be used multiple times to add more addresses. Format `[<wss-host>:<wss-port>]`.

- `--wss-certs`: Defines the path for the certificate and key. The default path is the same as the RPC file path, to utilize gRPC/clnrest's client certificate. If it is missing at the configured location, a new identity will be generated.

```
wss-bind-addr=127.0.0.1:5002
wss-certs=/home/user/.lightning/regtest
```
### lnmessage Client Example

```javascript
import Lnmessage from 'lnmessage';
import crypto from 'crypto';

// … (connection setup elided) …

async function connect() {
  // …
}

connect();
```
Check out a step-by-step recipe for building a simple `helloworld.py` example plugin based on [pyln-client](https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client).

🦉 **[Write a hello-world plugin in Python](https://docs.corelightning.org/v1.0/recipes/write-a-hello-world-plugin-in-python)**

You can also follow along with the video below, where Blockstream engineer Rusty Russell walks you all the way from getting started with Core Lightning to building a plugin in Python.

**[▶️ Rusty Russell | Getting Started with c-lightning | July 2019](https://www.youtube.com/watch?v=fab4P3BIZxk)**

Finally, `lightningd`'s own internal [tests](https://github.com/ElementsProject/lightning/tree/master/tests/plugins) can be a useful (and most reliable) resource.
privacy:
  view: public
---
Hooks allow a plugin to define custom behavior for `lightningd` without having to modify the Core Lightning source code itself. A plugin declares that it'd like to be consulted on what to do next for certain events in the daemon. A hook can then decide how `lightningd` should react to the given event.

When hooks are registered, they can optionally specify "before" and "after" arrays of plugin names, which control what order they will be called in. If a plugin name is unknown, it is ignored; otherwise, if the hook calls cannot be ordered to satisfy the specifications of all plugin hooks, the plugin registration will fail.

The call semantics of the hooks, i.e., when and how hooks are called, depend on the hook type. Most hooks are currently set to `single`-mode. In this mode only a single plugin can register the hook, and that plugin will get called for each event of that type. If a second plugin attempts to register the hook, it gets killed and a corresponding log entry is added to the logs.

In `chain`-mode multiple plugins can register for the hook type and they are called in the order they are loaded (i.e. cmdline order first, configuration file order second; note that the order of plugin directories is implementation-dependent), overridden only by `before` and `after` requirements the plugin's hook registrations specify. Each plugin can then handle the event or defer by returning a `continue` result like the following:
```json
{
  "result": "continue"
}
```
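The `before`/`after` ordering constraints described above can be sketched as a topological sort. This is purely illustrative (it is not how `lightningd` implements it) and uses only the Python standard library:

```python
from graphlib import TopologicalSorter, CycleError

def order_hooks(registrations):
    """registrations: plugin name -> {"before": [...], "after": [...]}.

    Returns a call order satisfying all constraints, or raises ValueError
    (mirroring the failed-registration case described above)."""
    known = set(registrations)
    ts = TopologicalSorter({name: set() for name in registrations})
    for name, spec in registrations.items():
        # "after": this plugin runs after the named plugins.
        for dep in spec.get("after", []):
            if dep in known:  # unknown plugin names are ignored
                ts.add(name, dep)
        # "before": the named plugins run after this one.
        for succ in spec.get("before", []):
            if succ in known:
                ts.add(succ, name)
    try:
        return list(ts.static_order())
    except CycleError:
        raise ValueError("hook ordering constraints cannot be satisfied")
```

For example, a plugin registering with `"before": ["b"]` is guaranteed to be called before plugin `b`, while references to plugins that never registered are silently dropped.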
The remainder of the response is ignored and if there are any more plugins that have registered the hook the next one gets called. If there are no more plugins then the internal handling is resumed as if no hook had been called. Any other result returned by a plugin is considered an exit from the chain. Upon exit no more plugin hooks are called for the current event, and the result is executed. Unless otherwise stated, all hooks are `single`-mode.

Hooks and notifications are very similar, however there are a few key differences:
### `peer_connected`

This hook is called whenever a peer has connected and successfully completed the cryptographic handshake. The parameters have the following structure:

```json
{
  "peer": {
    ...
  }
}
```
The hook is sparse on information, since the plugin can use the JSON-RPC `listpeers` command to get additional details should they be required. `direction` is either `"in"` or `"out"`. The `addr` field shows the address that we are connected to ourselves, not the gossiped list of known addresses. In particular this means that the port for incoming connections is an ephemeral port that may not be available for reconnections.

The returned result must contain a `result` member which is either the string `disconnect` or `continue`. If `disconnect` and there's a member `error_message`, that member is sent to the peer before disconnection.

Note that `peer_connected` is a chained hook. The first plugin that decides to `disconnect`, with or without an `error_message`, will lead to the subsequent plugins not being called anymore.
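As a sketch, a `peer_connected` handler might look like the following. It is written as a plain function for clarity (a real plugin would register it via pyln-client's `@plugin.hook("peer_connected")`), and the blocklist is a hypothetical example policy:

```python
# Hypothetical policy: node ids we refuse to stay connected to.
BLOCKLIST = {"03badpeer0000000000000000000000000000000000000000000000000000000000"}

def on_peer_connected(peer):
    """Decide whether to keep or drop a freshly connected peer.

    `peer` is the object from the hook payload; `peer["id"]` is the node id."""
    if peer["id"] in BLOCKLIST:
        return {
            "result": "disconnect",
            # error_message is sent to the peer before disconnection
            "error_message": "connections from this node are not accepted",
        }
    # Defer to later plugins in the chain / lightningd's default handling.
    return {"result": "continue"}
```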
### `recover`

This hook is called whenever the node is started using the `--recover` flag. Whenever a user wants to recover their node from a codex32 secret, they can pass `--recover=<codex32-secret>` to use that secret as their HSM secret.

The payload consists of the following information:

```json
{
  ...
}
```
3. Verification that the signatures match the commitment transaction
4. Exchange of revocation secrets that could be used to penalize an eventual misbehaving party

The `commitment_revocation` hook is used to inform the plugin about the state transition being completed, and deliver the penalty transaction. The penalty transaction could then be sent to a watchtower that automatically reacts in case one party attempts to settle using a revoked commitment.

The payload consists of the following information:
```json
{
  "commitment_txid": "58eea2cf538cfed79f4d6b809b920b40bb6b35962c4bb4cc81f5550a7728ab05",
  ...
}
```
Notice that the `commitment_txid` could also be extracted from the sole input of the `penalty_tx`; however, it is included so plugins don't have to parse transactions themselves.

Not included are the `htlc_success` and `htlc_failure` transactions that may also be spending `commitment_tx` outputs. This is because these transactions are much more dynamic and have a predictable timeout, allowing wallets to ensure a quick check-in when the CLTV of the HTLC is about to expire.
### `db_write`

This hook is called whenever a change is about to be committed to the database, if you are using a SQLITE3 database (the default). This hook will be useless (the `"writes"` field will always be empty) if you are using a PostgreSQL database.

It is currently extremely restricted:

3. the hook will be called before your plugin is initialized!

This hook, unlike all the other hooks, is also strongly synchronous: `lightningd` will stop almost all the other processing until this hook responds.
```json
{
  "data_version": 42,
  ...
}
```
This hook is intended for creating continuous backups. The intent is that your backup plugin maintains three pieces of information (possibly in separate files):

1. a snapshot of the database
`data_version` is an unsigned 32-bit number that will always increment by 1 each time `db_write` is called. Note that this will wrap around on the limit of 32-bit numbers.

`writes` is an array of strings, each string being a database query that modifies the database. If the `data_version` above is validated correctly, then you can simply append this to the log of database queries.

Your plugin **MUST** validate the `data_version`. It **MUST** keep track of the previous `data_version` it got, and:

1. If the new `data_version` is **_exactly_** one higher than the previous, then this is the ideal case, nothing bad happened, and we should save this and continue.
2. If the new `data_version` is **_exactly_** the same value as the previous, then the previous set of queries was not committed. Your plugin **MAY** overwrite the previous set of queries with the current set, or it **MAY** overwrite its entire backup with a new snapshot of the database and the current `writes` array (treating this case as if `data_version` were two or more higher than the previous).
3. If the new `data_version` is **_less than_** the previous, your plugin **MUST** halt and catch fire, and have the operator inspect what exactly happened here.
4. Otherwise, some queries were lost and your plugin **SHOULD** recover by creating a new snapshot of the database: copy the database file, back up the given `writes` array, then delete (or atomically `rename` if in a POSIX filesystem) the previous backups of the database and SQL statements, or you **MAY** fail the hook to abort `lightningd`.
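The four rules above can be sketched as a small classifier. The function name and return values are illustrative (not part of any API), and the treatment of "less than" across the 32-bit wraparound is an assumption noted in the comments:

```python
def check_data_version(prev, new):
    """Classify a db_write according to the backup rules above.

    Returns one of:
      "append"   - rule 1: exactly one higher, append the writes to the log
      "replace"  - rule 2: same value, previous batch was not committed
      "halt"     - rule 3: version went backwards, halt and catch fire
      "snapshot" - rule 4: queries were lost, take a fresh snapshot
    """
    expected = (prev + 1) % 2**32  # data_version wraps at 32 bits
    if new == expected:
        return "append"
    if new == prev:
        return "replace"
    # Assumption: interpret "less than" modulo 2**32, so a value in the
    # half-range behind `prev` counts as backwards (rule 3); anything
    # else is a forward skip, i.e. lost queries (rule 4).
    if (prev - new) % 2**32 < 2**31:
        return "halt"
    return "snapshot"
```

Note that the wraparound case (`prev == 2**32 - 1`, `new == 0`) correctly classifies as the ideal `"append"` case.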
The "rolling up" of the database could be done periodically as well if the log of SQL statements has grown large.

Any response other than `{"result": "continue"}` will cause `lightningd` to error without committing to the database! This is the expected way to halt and catch fire.

`db_write` is a parallel-chained hook, i.e., multiple plugins can register it, and all of them will be invoked simultaneously without regard for order of registration. The hook is considered handled if all registered plugins return `{"result": "continue"}`. If any plugin returns anything else, `lightningd` will error without committing to the database.
### `invoice_payment`

This hook is called whenever a valid payment for an unpaid invoice has arrived.

```json
{
  "payment": {
    ...
  }
}
```
Before version `23.11` the `msat` field was a string with an msat suffix, e.g. `"10000msat"`.

The hook is deliberately sparse, since the plugin can use the JSON-RPC `listinvoices` command to get additional details about this invoice. It can return a `failure_message` field as defined for final nodes in [BOLT 4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#failure-messages), a `result` field with the string `reject` to fail it with `incorrect_or_unknown_payment_details`, or a `result` field with the string `continue` to accept the payment.
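As an illustrative sketch, a handler might reject payments below a hypothetical minimum amount (plain function for clarity; a real plugin would register it as the `invoice_payment` hook, and `msat` is assumed to be a number as in version `23.11` and later):

```python
MIN_MSAT = 1000  # hypothetical policy: refuse dust payments

def on_invoice_payment(payment):
    """`payment` is the "payment" object from the hook payload."""
    if int(payment["msat"]) < MIN_MSAT:
        # Fails the payment with incorrect_or_unknown_payment_details.
        return {"result": "reject"}
    return {"result": "continue"}
```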
### `openchannel`

This hook is called whenever a remote peer tries to fund a channel to us using the v1 protocol, and it has passed basic sanity checks:

```json
{
  "openchannel": {
    ...
  }
}
```
There may be additional fields, including `shutdown_scriptpubkey` as a hex string. You can see the definitions of these fields in [BOLT 2's description of the open_channel message](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message).

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a `continue`d result, you can also include a `close_to` address, which will be used as the output address for a mutual close transaction.

e.g.
```json
{
  "result": "continue",
  ...
}
```
Note that `close_to` must be a valid address for the current chain; an invalid address will cause the node to exit with an error.

- `mindepth` is the number of confirmations to require before making the channel usable. Notice that setting this to 0 (`zeroconf`) or some other low value might expose you to double-spending issues, so only lower this value from the default if you trust the peer not to double-spend, or you reject incoming payments, including forwards, until the funding is confirmed.

- `reserve` is an absolute value for the amount in the channel that the peer must keep on their side. This ensures that they always have something to lose, so only lower this below 1% of the funding amount if you trust the peer. The protocol requires this to be larger than the dust limit, hence it will be adjusted to be the dust limit if the specified value is below that.

Note that `openchannel` is a chained hook. Therefore `close_to` and `reserve` will only be evaluated for the first plugin that sets them. If more than one plugin tries to set a `close_to` address, an error will be logged.
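Putting the pieces together, a hypothetical `openchannel` handler might enforce a minimum funding amount and steer the mutual close. All names here are illustrative: the threshold is an example policy, and the `funding_msat` payload field name is an assumption (check the payload your node actually delivers):

```python
MIN_FUNDING_MSAT = 20_000_000  # hypothetical policy threshold

def on_openchannel(openchannel, close_to=None):
    """Reject too-small v1 channel opens; otherwise continue,
    optionally steering the mutual close to `close_to`."""
    # Assumed field name; inspect the real hook payload before relying on it.
    if int(openchannel["funding_msat"]) < MIN_FUNDING_MSAT:
        return {"result": "reject", "error_message": "channel too small"}
    resp = {"result": "continue"}
    if close_to is not None:
        # Must be a valid address for the current chain, or the node exits.
        resp["close_to"] = close_to
    return resp
```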
### `openchannel2`

This hook is called whenever a remote peer tries to fund a channel to us using the v2 protocol, and it has passed basic sanity checks:

```json
{
  "openchannel2": {
    ...
  }
}
```
There may be additional fields, such as `shutdown_scriptpubkey`. You can see the definitions of these fields in [BOLT 2's description of the open_channel message](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message).

`requested_lease_msat`, `lease_blockheight_start`, and `node_blockheight` are only present if the opening peer has requested a funding lease, per `option_will_fund`.

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a `continue`d result, you can also include a `close_to` address, which will be used as the output address for a mutual close transaction; you can include a `psbt` and an `our_funding_msat` to contribute funds, inputs and outputs to this channel open.

See `plugins/funder.c` for an example of how to use this hook to contribute funds to a channel open.

e.g.
```json
{
  "result": "continue",
  ...
}
```
Note that `close_to` must be a valid address for the current chain; an invalid address will cause the node to exit with an error.

Note that `openchannel` is a chained hook. Therefore `close_to` will only be evaluated for the first plugin that sets it. If more than one plugin tries to set a `close_to` address, an error will be logged.
### `openchannel2_changed`

This hook is called when we receive updates to the funding transaction from the peer.

```json
{
  "openchannel2_changed": {
    ...
  }
}
```
In return, we expect a `result` indicating `continue` and an updated `psbt`. If we have no updates to contribute, return the passed-in PSBT. Once no changes to the PSBT are made on either side, the transaction construction negotiation will end and commitment transactions will be exchanged.

#### Expected Return

```json
{
  "result": "continue",
  ...
}
```

See `plugins/funder.c` for an example of how to use this hook to continue a v2 channel open.
### `openchannel2_sign`

This hook is called after we've gotten the commitment transactions for a channel open. It expects a `psbt` to be returned which contains signatures for our inputs to the funding transaction.

```json
{
  "openchannel2_sign": {
    ...
  }
}
```

In return, we expect a `result` indicating `continue` and a partially signed `psbt`.

If we have no inputs to sign, return the passed-in PSBT. Once we have also received the signatures from the peer, the funding transaction will be broadcast.

#### Expected Return

```json
{
  "result": "continue",
  ...
}
```

See `plugins/funder.c` for an example of how to use this hook to sign a funding transaction.
### `rbf_channel`

Similar to `openchannel2`, the `rbf_channel` hook is called when a peer requests an RBF for a channel funding transaction.

```json
{
  "rbf_channel": {
    ...
  }
}
```

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a `continue`d result, you can include a `psbt` and an `our_funding_msat` to contribute funds, inputs and outputs to this channel open.

Note that, like the `openchannel_init` RPC call, the `our_funding_msat` amount must NOT be accounted for in any supplied output. Change, however, should be included and should use the `funding_feerate_per_kw` to calculate.

#### Return

```json
{
  "result": "continue",
  ...
}
```
### `htlc_accepted`

The `htlc_accepted` hook is called whenever an incoming HTLC is accepted, and its result determines how `lightningd` should treat that HTLC.

The payload of the hook call has the following format:

```json
{
  "peer_id": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
  ...
}
```
For detailed information about each field please refer to [BOLT 04 of the specification](https://github.com/lightning/bolts/blob/master/04-onion-routing.md); the following is just a brief summary:

- `peer_id`: the id of the peer that offered us this HTLC.
- `onion`:
  - `payload` contains the unparsed payload that was sent to us from the sender of the payment.
  - `short_channel_id` determines the channel that the sender is hinting should be used next. Not present if we're the final destination.
  - `forward_amount` is the amount we should be forwarding to the next hop, and should match the incoming funds in case we are the recipient.
  - `outgoing_cltv_value` determines what the CLTV value for the HTLC that we forward to the next hop should be.
  - `total_msat` specifies the total amount to pay, if present.
- `forward_to`: if set, the channel_id we intend to forward this to (will not be present if the short_channel_id was invalid or we were the final destination).
The hook response must have one of the following formats:

```json
{
  "result": "continue"
}
```
This means that the plugin does not want to do anything special and `lightningd` should continue processing it normally, i.e., resolve the payment if we're the recipient, or attempt to forward it otherwise. Notice that the usual checks such as sufficient fees and CLTV deltas are still enforced.
|
||||
|
||||
It can also replace the `onion.payload` by specifying a `payload` in the response. Note that this is always a TLV-style payload, so unlike `onion.payload` there is no length prefix (and it must be at least 4 hex digits long). This will be re-parsed; it's useful for removing onion fields which a plugin doesn't want lightningd to consider.
|
||||
It can also replace the `onion.payload` by specifying a `payload` in the response. Note that this is always a TLV-style payload, so unlike `onion.payload` there is no length prefix (and it must be at least 4 hex digits long). This will be re-parsed; it's useful for removing onion fields which a plugin doesn't want lightningd to consider.
|
||||
|
||||
It can also specify `forward_to` in the response, replacing the destination. This usually only makes sense if it wants to choose an alternate channel to the same next peer, but is useful if the `payload` is also replaced.
|
||||
It can also specify `forward_to` in the response, replacing the destination. This usually only makes sense if it wants to choose an alternate channel to the same next peer, but is useful if the `payload` is also replaced.
|
||||
|
||||
Also, it can specify `extra_tlvs` in the response. This will replace the TLV-stream `update_add_htlc_tlvs` in the `update_add_htlc` message for forwarded htlcs.
|
||||
|
||||
If the node is the final destination, the plugin can also replace the amount of the invoice that belongs to the `payment_hash` by specifying `invoice_msat`.
|
||||
```json
{
  "result": "fail",
  "failure_message": "2002"
}
```

`fail` will tell `lightningd` to fail the HTLC with a given hex-encoded `failure_message` (please refer to the [spec](https://github.com/lightning/bolts/blob/master/04-onion-routing.md) for details: `incorrect_or_unknown_payment_details` is the most common).
```json
{
  "result": "fail",
  "failure_onion": "[serialized error onion]"
}
```

Instead of `failure_message` the response can contain a hex-encoded `failure_onion` that will be used instead (please refer to the [spec](https://github.com/lightning/bolts/blob/master/04-onion-routing.md) for details). This can be used, for example, if you're writing a bridge between two Lightning Networks. Note that `lightningd` will apply the obfuscation step to the value returned here with its own shared secret (and key type `ammag`) before returning it to the previous hop.
```json
{
  "result": "resolve",
  "payment_key": "0000000000000000000000000000000000000000000000000000000000000000"
}
```

`resolve` instructs `lightningd` to claim the HTLC by providing the preimage matching the `payment_hash` presented in the call. Notice that the plugin must ensure that the `payment_key` really matches the `payment_hash` since `lightningd` will not check and the wrong value could result in the channel being closed.
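The resolve-or-fail decision above can be sketched as a plain function. The hook wiring, plugin framework and the preimage store (`known_preimages`) are hypothetical, but the sha256 relationship between `payment_key` and `payment_hash` is the real constraint `lightningd` relies on:

```python
import hashlib

def htlc_accepted_response(payment_hash, known_preimages):
    """Sketch: choose a response for the htlc_accepted hook.

    payment_hash: hex-encoded hash from the hook's htlc payload.
    known_preimages: hex-encoded preimages a hypothetical plugin manages.
    """
    for preimage in known_preimages:
        # lightningd does NOT verify this: returning a payment_key that
        # doesn't hash to payment_hash can get the channel closed.
        if hashlib.sha256(bytes.fromhex(preimage)).hexdigest() == payment_hash:
            return {"result": "resolve", "payment_key": preimage}
    # 0x400f = PERM|15, incorrect_or_unknown_payment_details
    return {"result": "fail", "failure_message": "400f"}
```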
> 🚧

The `htlc_accepted` hook is a chained hook, i.e., multiple plugins can register it, and they will be called in the order they were registered.
### `rpc_command`

The `rpc_command` hook allows a plugin to take over any RPC command. It sends the received JSON-RPC request to the registered plugin. You can optionally specify a "filters" array, containing the command names you want to intercept: without this, all commands will be sent to this hook.

```json
{
  "rpc_command": {
  }
}
```

which can in turn:

Let `lightningd` execute the command with

```json
{
  "result": "continue"
}
```

Replace the request made to `lightningd`:

```json
{
  "replace": {
  }
}
```

Return a custom response to the request sender:

```json
{
  "return": {
    "result": {
    }
  }
}
```

Return a custom error to the request sender:

```json
{
  "return": {
    "error": {
    }
  }
}
```

Note: The `rpc_command` hook is chainable. If two or more plugins try to replace/result/error the same `method`, only the first plugin in the chain will be respected. Others will be ignored and a warning will be logged.
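A minimal sketch of the decision such a plugin might make per request; the intercepted method name (`listfunds`) and the canned result are hypothetical:

```python
def rpc_command_response(request):
    """Sketch: answer the rpc_command hook for one incoming request."""
    if request["method"] != "listfunds":  # hypothetical filter
        return {"result": "continue"}     # let lightningd execute it
    # Answer the caller directly instead of executing the command.
    return {"return": {"result": {"outputs": [], "channels": []}}}
```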
### `custommsg`

The `custommsg` plugin hook is the receiving counterpart to the [`sendcustommsg`](ref:sendcustommsg) RPC method and allows plugins to handle messages that are not handled internally. The goal of these two components is to allow the implementation of custom protocols or prototypes on top of a Core Lightning node, without having to change the node's implementation itself. Note that if the hook registration specifies "filters" then that should be a JSON array of message numbers, and the hook will only be called for those. Otherwise, the hook is called for all messages not handled internally.

The payload for a call follows this format:

```json
{
  "peer_id": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
  "payload": "1337ffffffff"
}
```

This payload would have been sent by the peer with the `node_id` matching `peer_id`, and the message has type `0x1337` and contents `ffffffff`. Notice that the messages are currently limited to odd-numbered types and must not match a type that is handled internally by Core Lightning. These limitations are in place in order to avoid conflicts with the internal state tracking and to avoid disconnections or channel closures, since odd-numbered messages can be ignored by nodes (see ["it's ok to be odd" in the specification](https://github.com/lightning/bolts/blob/c74a3bbcf890799d343c62cb05fcbcdc952a1cf3/01-messaging.md#lightning-message-format) for details). The plugin must implement the parsing of the message, including the type prefix, since Core Lightning does not know how to parse the message.
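The parsing described above, including the odd-type rule, can be sketched as a small helper; the function name is hypothetical, but the two-byte big-endian type prefix matches the payload format shown:

```python
def parse_custommsg(payload_hex):
    """Sketch: split a custommsg payload into (type, contents).

    The first two bytes are the big-endian message type; the plugin is
    responsible for all parsing, including this type prefix.
    """
    raw = bytes.fromhex(payload_hex)
    msg_type = int.from_bytes(raw[:2], "big")
    if msg_type % 2 == 0:
        raise ValueError("even message types are reserved ('it's ok to be odd')")
    return msg_type, raw[2:].hex()
```

With the payload from the example, `parse_custommsg("1337ffffffff")` yields type `0x1337` and contents `ffffffff`.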
Because this is a chained hook, the daemon expects the result to be `{'result': 'continue'}`. It will fail if something else is returned.

### `onion_message_recv` and `onion_message_recv_secret`

These two hooks are almost identical, in that they are called when an onion message is received.

`onion_message_recv` is used for unsolicited messages (where the source knows that it is sending to this node), and `onion_message_recv_secret` is used for messages which use a blinded path we supplied. The latter hook will have a `pathsecret` field, the former never will.

These hooks are separate, because replies MUST be ignored unless they use the correct path (i.e. `onion_message_recv_secret`, with the expected `pathsecret`). This avoids the source trying to probe for responses without using the designated delivery path.

The payload for a call follows this format:

```json
{
  "onion_message": {
  }
}
```
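The pathsecret check that motivates the split between the two hooks can be sketched as follows; the `pathsecret` field name matches the hook payload, everything else is hypothetical:

```python
import hmac

def is_expected_reply(hook_payload, expected_pathsecret):
    """Sketch: accept a reply only if it arrived via our blinded path."""
    supplied = hook_payload.get("pathsecret")
    if supplied is None:
        # onion_message_recv case: unsolicited, never trusted as a reply
        return False
    # constant-time compare, since the pathsecret acts as an authenticator
    return hmac.compare_digest(supplied, expected_pathsecret)
```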
To make your plugin compatible with reckless install:

- Choose a unique plugin name.
- The plugin entrypoint is inferred. Naming your plugin executable the same as your plugin name will allow reckless to identify it correctly (file extensions are okay).
- For python plugins, a requirements.txt is the preferred medium for python dependencies. A pyproject.toml will be used as a fallback, but test installation via `pip install -e .`; Poetry looks for additional files in the working directory, whereas with pip, any references to these will require something like `packages = [{ include = "*.py" }]` under the `[tool.poetry]` section.
- Additional repository sources may be added with `reckless source add https://my.repo.url/here`, however https://github.com/lightningd/plugins is included by default. Consider adding your plugin to lightningd/plugins to make installation simpler.
- If your plugin is located in a subdirectory of your repo with a different name than your plugin, it will likely be overlooked.
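Concretely, a repository layout reckless can discover might look like this (names hypothetical):

```
myplugin/
├── myplugin.py        # executable entrypoint named after the plugin
└── requirements.txt   # preferred medium for python dependencies
```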
> 📘
# Using a pruned Bitcoin Core node

Core Lightning requires JSON-RPC access to a fully synchronized `bitcoind` in order to synchronize with the Bitcoin network.

# Connecting to Bitcoin Core remotely

You can use _trusted_ third-party plugins as bitcoin backends instead of using your own node.

- [sauron](https://github.com/lightningd/plugins/tree/master/sauron) is a bitcoin backend plugin relying on [Esplora](https://github.com/Blockstream/esplora).
- [trustedcoin](https://github.com/nbd-wtf/trustedcoin) is a plugin that uses block explorers (blockstream.info, mempool.space, blockchair.com and blockchain.info) as backends instead of your own bitcoin node.
- [bps](https://github.com/coinos/bps) is a proxy server that exposes just the RPC commands that lightning needs. There's a public endpoint at https://coinos.io/proxy or you can host your own.
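As an illustration, a `config` fragment for running with sauron instead of a local bitcoind might look like the following; the option names should be checked against the sauron README, and the plugin path is hypothetical:

```
disable-plugin=bcli
plugin=/path/to/sauron.py
sauron-api-endpoint=https://blockstream.info/api
```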
# Binaries

If you're on Ubuntu, you need to install bitcoind:

```shell
sudo apt-get install -y software-properties-common
sudo snap install bitcoin-core
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```

Then you can fetch a pre-compiled binary from the [releases](https://github.com/ElementsProject/lightning/releases) page on GitHub. Core Lightning provides binaries for both Ubuntu and Fedora distributions. Normally these binaries are extracted into /usr/local:

```shell
sudo rm -R /usr/local/libexec/c-lightning/plugins # If you are upgrading run this first to avoid plugin conflicts
sudo tar -xvf <release>.tar.xz -C /usr/local --strip-components=2
```
If you're on a different distribution or OS, you can compile the source by following the instructions from [Installing from Source](doc:installing-from-source).

# Docker

To install the Docker image for the latest stable release:

```shell
docker pull elementsproject/lightningd:latest
```

To install for a specific version, for example, 24.05:

```shell
docker pull elementsproject/lightningd:v24.05
```

To run the Docker container:

```shell
docker run --rm --init -v /path/on/host/lightning-data:/root/.lightning -p 9735:9735 -p 9835:9835 lightningd
```
## To Build on Ubuntu

OS version: Ubuntu 15.10 or above

Get dependencies:

```shell
sudo apt-get update
sudo apt-get install -y autoconf automake build-essential git libtool libsqlite3-dev python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext
curl -LsSf https://astral.sh/uv/install.sh | sh
```

After installing uv, restart your shell or run `source ~/.bashrc` to ensure `uv` is in your PATH.
If you don't have Bitcoin installed locally you'll need to install that as well. It's now available via [snapd](https://snapcraft.io/bitcoin-core).

```shell
sudo apt-get install snapd
sudo snap install bitcoin-core
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```
Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Checkout a release tag:

```shell
git checkout v25.02
```

For development or running tests, get additional dependencies:

```shell
sudo apt-get install -y valgrind libpq-dev shellcheck cppcheck \
  libsecp256k1-dev lowdown
```

If you want to build the Rust plugins (cln-grpc, clnrest, cln-bip353 and wss-proxy):

```shell
sudo apt-get install -y cargo rustfmt protobuf-compiler
```
> 📘
>
> If your build fails because of your Rust version, you might want to check out [rustup](https://rustup.rs/) to install a newer version.

There are two ways to build Core Lightning, and this depends on how you want to use it.

To build CLN for production:

```shell
uv sync --all-extras --all-groups --frozen
./configure
uv run make
sudo RUST_PROFILE=release make install
```
> 📘
>
> If you want to disable Rust because you don't need it or its plugins (cln-grpc, clnrest, cln-bip353 or wss-proxy), you can use `./configure --disable-rust`.

To build CLN for development:

```shell
uv sync --all-extras --all-groups --frozen
./configure
uv run make
uv run make check VALGRIND=0
```
Optionally, add `-j$(nproc)` after `make` to speed up compilation (e.g. `make -j$(nproc)`).

Running lightning:

```shell
bitcoind &
./lightningd/lightningd &
```
## To Build on Fedora

OS version: Fedora 27 or above

Get dependencies:

```shell
sudo dnf update -y && \
sudo dnf groupinstall -y 'C Development Tools and Libraries'
```

Make sure you have [bitcoind](https://github.com/bitcoin/bitcoin) available to run.
Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Checkout a release tag:

```shell
git checkout v24.05
```

Build and install lightning:

```shell
./configure
make
sudo make install
```

Running lightning (mainnet):

```shell
bitcoind &
lightningd --network=bitcoin
```

Running lightning on testnet:

```shell
bitcoind -testnet &
lightningd --network=testnet
```
## To Build on FreeBSD

OS version: FreeBSD 11.1-RELEASE or above

```shell
pkg install git python py39-pip gmake libtool gmp sqlite3 postgresql13-client gettext autotools lowdown libsodium
git clone https://github.com/ElementsProject/lightning.git
cd lightning
pip install --upgrade pip
pip3 install mako
./configure
gmake
gmake install
```

Alternatively, Core Lightning is in the FreeBSD ports, so install it as any other port (dependencies are handled automatically):

```shell
# pkg install c-lightning
```

If you want to compile locally and fiddle with compile time options:

```shell
# cd /usr/ports/net-p2p/c-lightning && make install
```
Running lightning:

Configure bitcoind, if not already: add `rpcuser=<foo>` and `rpcpassword=<bar>` to `/usr/local/etc/bitcoin.conf`, maybe also `testnet=1`.

Configure lightningd: copy `/usr/local/etc/lightningd-bitcoin.conf.sample` to `/usr/local/etc/lightningd-bitcoin.conf` and edit according to your needs.

```shell
# service bitcoind start
# service lightningd start
```
## To Build on OpenBSD

OS version: OpenBSD 7.3

Install dependencies:

```shell
pkg_add git python gmake py3-pip libtool gettext-tools
pkg_add automake # (select highest version, automake1.16.2 at time of writing)
pkg_add autoconf # (select highest version, autoconf-2.69p2 at time of writing)
```

Install `mako` otherwise we run into build errors:

```shell
pip3 install --user poetry
poetry install
```

Add `/home/<username>/.local/bin` to your path:

`export PATH=$PATH:/home/<username>/.local/bin`

Needed for `configure`:
```shell
export AUTOCONF_VERSION=2.69
export AUTOMAKE_VERSION=1.16
```

Finally, build `c-lightning`:

```shell
./configure
gmake
```
## To Build on NixOS

Use nix-shell to launch a shell with a full Core Lightning dev environment:

```shell
nix-shell -Q -p gdb sqlite autoconf git clang libtool sqlite autoconf \
autogen automake gmp zlib gettext libsodium poetry 'python3.withPackages (p: [p.bitcoinlib])'
```
## To Build on macOS (Apple Silicon)

First confirm which architecture of Mac you are running:

```shell
arch
```

If you see this result: `arm64`

Continue with these instructions. If you see any other result switch to Build on macOS Intel instructions.

Confirm you are using Apple Silicon Homebrew:

```shell
which brew
which pkg-config
```
If you see this result:

```
/opt/homebrew/bin/brew
/opt/homebrew/bin/pkg-config
```

You are using Apple Silicon Homebrew and can continue with the instructions, skip to "Install dependencies".

If you see this in the result: `/usr/local/bin/brew`

You are using brew in Intel compatibility mode. The simplest solution is to remove brew entirely, reinstall it, and start these instructions over.

Install dependencies:

```shell
brew install autoconf automake libtool python3 gnu-sed gettext libsodium protobuf lowdown pkgconf openssl
export PATH="/opt/homebrew/opt/:$PATH"
export CPATH=/opt/homebrew/include
export LIBRARY_PATH=/opt/homebrew/lib
```
If you need SQLite (or get a SQLite mismatch build error):

```shell
brew install sqlite
```

Install uv for Python dependency management:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

After installing uv, restart your shell or run `source ~/.zshrc` to ensure `uv` is in your PATH.

If you don't have bitcoind installed locally you'll need to install that as well:

```shell
brew install boost cmake pkg-config libevent
git clone https://github.com/bitcoin/bitcoin
cd bitcoin
cmake -B build
cmake --build build
cmake --install build --component bitcoind && cmake --install build --component bitcoin-cli
```
Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Checkout a release tag:

```shell
git checkout v24.05
```

Build lightning:

```shell
uv sync --all-extras --all-groups --frozen
./configure
```

If you see `/usr/local` in the log, an Intel compatibility dependency has been picked up. The simplest solution is to remove brew entirely, reinstall it, and start these instructions over.

```shell
uv run make
```
Running lightning:

> 📘
>
> Edit your `~/Library/Application\ Support/Bitcoin/bitcoin.conf` to include `rpcuser=<foo>` and `rpcpassword=<bar>` first, you may also need to include `testnet=1`.

```shell
bitcoind &
./lightningd/lightningd &
```

To install the built binaries into your system, you'll need to run `make install`:

```shell
make install
```

You may need to use this command instead. Confirm the exported PATH, CPATH, and LIBRARY_PATH environment variables set earlier are still present.

```shell
sudo make install
```
## To Build on macOS (Intel)

Assuming you have Xcode and Homebrew installed.

Install dependencies:

```shell
brew install autoconf automake libtool python3 gnu-sed gettext libsodium protobuf lowdown pkgconf openssl
export PATH="/usr/local/opt/:$PATH"
export CPATH=/usr/local/include
export LIBRARY_PATH=/usr/local/lib
```

If you need SQLite (or get a SQLite mismatch build error):

```shell
brew install sqlite
```

Install uv for Python dependency management:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

After installing uv, restart your shell or run `source ~/.zshrc` to ensure `uv` is in your PATH.

If you don't have bitcoind installed locally you'll need to install that as well:

```shell
brew install boost cmake pkg-config libevent
git clone https://github.com/bitcoin/bitcoin
cd bitcoin
cmake -B build
cmake --build build
cmake --install build --component bitcoind && cmake --install build --component bitcoin-cli
```
Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Checkout a release tag:

```shell
git checkout v24.05
```

Build lightning:

```shell
uv sync --all-extras --all-groups --frozen
./configure
uv run make
```

Running lightning:
> 📘
>
> Edit your `~/Library/Application\ Support/Bitcoin/bitcoin.conf` to include `rpcuser=<foo>` and `rpcpassword=<bar>` first, you may also need to include `testnet=1`.

```shell
bitcoind &
./lightningd/lightningd &
```

To install the built binaries into your system, you'll need to run `make install`:

```shell
make install
```
## To Build on Arch Linux

Install dependencies:

```shell
pacman --sync autoconf automake gcc git make python-pip
pip install --user poetry
```

Clone Core Lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Build Core Lightning:

```shell
python -m poetry install
./configure
python -m poetry run make
```

Launch Core Lightning:

```
./lightningd/lightningd
```
## To cross-compile for Android

Make a standalone toolchain as per https://developer.android.com/ndk/guides/standalone_toolchain.html.

For Core Lightning you must target an API level of 24 or higher.

Depending on your toolchain location and target arch, source env variables such as:

```shell
export PATH=$PATH:/path/to/android/toolchain/bin
# Change next line depending on target device arch
target_host=arm-linux-androideabi
export AR=$target_host-ar
export AS=$target_host-clang
export CC=$target_host-clang
export CXX=$target_host-clang++
export LD=$target_host-ld
export STRIP=$target_host-strip
```

Two makefile targets should not be cross-compiled so we specify a native CC:

```shell
make CC=clang clean ccan/tools/configurator/configurator
make clean -C ccan/ccan/cdump/tools
```
Install the `qemu-user` package. This will allow you to properly configure the build for the target device environment.

Build with:

```shell
BUILD=x86_64 MAKE_HOST=arm-linux-androideabi \
make PIE=1
```
## To cross-compile for Raspberry Pi

Obtain the [official Raspberry Pi toolchains](https://github.com/raspberrypi/tools). This document assumes compilation will occur towards the Raspberry Pi 3 (arm-linux-gnueabihf as of Mar. 2018).

Depending on your toolchain location and target arch, source env variables will need to be set. They can be set from the command line as such:

```shell
export PATH=$PATH:/path/to/arm-linux-gnueabihf/bin
# Change next line depending on specific Raspberry Pi device
target_host=arm-linux-gnueabihf
export AR=$target_host-ar
export AS=$target_host-as
export CC=$target_host-gcc
export CXX=$target_host-g++
export LD=$target_host-ld
export STRIP=$target_host-strip
```

Install the `qemu-user` package. This will allow you to properly configure the build for the target device environment.

Config the arm elf interpreter prefix:

```shell
export QEMU_LD_PREFIX=/path/to/raspberry/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/arm-linux-gnueabihf/sysroot/
```
Obtain and install cross-compiled versions of sqlite3 and zlib:

Download and build zlib:

```shell
wget https://zlib.net/fossils/zlib-1.2.13.tar.gz
tar xvf zlib-1.2.13.tar.gz
cd zlib-1.2.13
./configure --prefix=$QEMU_LD_PREFIX
make
make install
```

Download and build sqlite3:

```shell
wget https://www.sqlite.org/2018/sqlite-src-3260000.zip
unzip sqlite-src-3260000.zip
cd sqlite-src-3260000
./configure --host=$target_host --prefix=$QEMU_LD_PREFIX
make
make install
```

Then, build Core Lightning with the following commands:

```
./configure
make
```

For all the other Pi devices out there, consider using [Armbian](https://www.armbian.com).
You can compile in `customize-image.sh` using the instructions for Ubuntu.

A working example that compiles both bitcoind and Core Lightning for Armbian can be found [here](https://github.com/Sjors/armbian-bitcoin-core).
## To compile for Alpine

Get dependencies:

```shell
apk update
apk add --virtual .build-deps ca-certificates alpine-sdk autoconf automake git libtool \
sqlite-dev python3 py3-mako net-tools zlib-dev libsodium gettext
```

Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
git submodule update --init --recursive
```

Build and install:

```shell
./configure
make
make install
```

Clean up:

```shell
cd .. && rm -rf lightning
apk del .build-deps
```

Install runtime dependencies:

```shell
apk add libgcc libsodium sqlite-libs zlib
```
## Python plugins

Python plugins will be installed with the `poetry install` step mentioned above from the development setup.

Other users will need some Python packages if python plugins are used. Unfortunately there are some Python packages which are not packaged in Ubuntu, and so forced installation will be needed (the `--user` flag is recommended, which will install them in the user's own .local directory, so at least the risk of breaking Python globally can be avoided!).