Liner is a Redis- or SQLite-backed message broker: a serverless-style catalog plus direct TCP data transfer between peers. The library is written in Rust and exposes a C interface. With SQLite the catalog runs in embedded single-file mode (no Redis process required); see the guide docs/using-sqlite.md.
Rust example:

```rust
use liner_broker::Liner;

fn main() {
    let mut client1 = Liner::new("client1", "topic_client1", "localhost:2255", "redis://localhost/");
    let mut client2 = Liner::new("client2", "topic_client2", "localhost:2256", "redis://localhost/");
    // Callback arguments: destination topic, sender name, payload.
    client1.run(Box::new(|_to: &str, from: &str, _data: &[u8]| {
        println!("receive_from {}", from);
    }));
    client2.run(Box::new(|_to: &str, from: &str, _data: &[u8]| {
        println!("receive_from {}", from);
    }));
    let array = [0; 100];
    for _ in 0..10 {
        // Third argument: at_least_once delivery.
        client1.send_to("topic_client2", array.as_slice(), true);
        println!("send_to client2");
    }
}
```

Python example:
```python
import liner

def receive_cback1(to: str, from_: str, data: bytes):
    print(f"receive_from {from_}, data: {data}")

def receive_cback2(to: str, from_: str, data: bytes):
    print(f"receive_from {from_}, data: {data}")

def receive_server(to: str, from_: str, data: bytes):
    print(f"receive_from {from_}, data: {data}")

def foo():
    # client1 and client2 subscribe to the same topic, so send_all reaches both.
    client1 = liner.Client("client1", "topic_client", "localhost:2255", "redis://localhost/")
    client2 = liner.Client("client2", "topic_client", "localhost:2256", "redis://localhost/")
    server = liner.Client("server", "topic_server", "localhost:2257", "redis://localhost/")
    client1.run(receive_cback1)
    client2.run(receive_cback2)
    server.run(receive_server)
    b = bytearray(b'hello world')
    server.send_all("topic_client", b)  # optional third arg: at_least_once (default True)
```
- high message bandwidth (see the benchmark below)
- delivery guarantee: at-least-once delivery (store-backed: Redis or SQLite)
- SQLite backend: one file per deployment / per process; isolated files give one-to-one messaging only, unless you share a single DB path or maintain `connection_key` manually (docs/using-sqlite.md)
- message size is neither predetermined nor limited
- easy API: run a client and send data to a topic
- interfaces for Python and C++
- cross-platform (Linux, Windows)
- various messaging options: one-to-one, one-to-many, many-to-many, and topic subscription
- install Rust and Cargo
- execute:

```
cargo build --release
```
- One to one: Python / CPP / Rust
- One to one for many: Python / CPP / Rust
- One to many: Python / CPP / Rust
- Many to many: Python / CPP / Rust
- Producer-consumer: Python / CPP / Rust
Two binaries stress the same pair of clients with the same `send_to` workload; only the store differs:

- `bench_pair_sendto_redis`: Redis catalog (`redis://localhost/`). Build and run: `cargo build --release --bin bench_pair_sendto_redis`, then `./bench_pair_sendto_redis`.
- `bench_pair_sendto_sqlite`: one shared temp SQLite file for both clients (same idea as one Redis URL), so listener acks and sender reads hit the same `conn_mess_number`. Fixed bind addresses; each side's `receivers_json` lists only the peer (topic / addr / `client_name`). Build and run: `cargo build --release --bin bench_pair_sendto_sqlite`, then `./bench_pair_sendto_sqlite`.
```
alex@ubuntu2004:~/projects/rust/liner/target/release$ ./bench_pair_sendto_redis
send_to 8 ms
receive_from 8 ms
send_to 5 ms
receive_from 5 ms
send_to 7 ms
receive_from 3 ms
send_to 11 ms
receive_from 3 ms
send_to 6 ms
receive_from 3 ms
```

About 10 ms on average for 10k messages.
```
alex@ubuntu2004:~/projects/rust/liner/benchmark/compare_with_zeromq$ make
g++ -Wall -O2 -std=c++17 -g -Wno-write-strings -o compare_with_zmq compare_with_zmq.cpp -lzmq
alex@ubuntu2004:~/projects/rust/liner/benchmark/compare_with_zeromq$ ./compare_with_zmq
Connecting to tcp://127.0.0.1:34079
send_to 20.198 ms
send_to 16.504 ms
send_to 11.5 ms
send_to 13.153 ms
send_to 10.964 ms
send_to 10.788 ms
send_to 10.785 ms
send_to 11.119 ms
send_to 11.348 ms
send_to 10.826 ms
```

ZeroMQ's timings are in the same range.
Run Rust unit tests:

```
cargo test
```

Run the Rust integration test against Redis (ignored by default):

```
LINER_TEST_REDIS=redis://localhost/ cargo test --test offline_delivery_redis -- --ignored
```

Run the Python integration tests:

```
cargo build --release
python3 test/run_integration.py --list
```

You can filter tests or keep running after failures:

```
python3 test/run_integration.py --only offline,burst
python3 test/run_integration.py --continue-on-fail
```

Python tests will auto-start Redis via Docker if it isn't reachable. You can customize the port and container name:

```
LINER_TEST_REDIS_PORT=16379 LINER_TEST_REDIS_CONTAINER=liner-test-redis python3 test/offline_delivery_more.py
```

SQLite-backed Python integration tests (no Redis; a shared temp DB per script):

```
cargo build --release
python3 test/sqlite/run_integration.py
```

Documentation:

- Using SQLite (`new_sqlite`, `receivers_json`, reference test walkthrough)
- Crate API on docs.rs
- Developer notes (errors, backends, C API, lifecycle)
- C API compatibility and building (symbols, cargo, Linux/Windows)
Licensed under the [MIT-2.0] license.





