[REDLNX//LOCAL_AI_PHOTO_ENGINE]
Local AI photo post-production for photographers tired of renting their own taste. Built solo in Rust + ONNX, AGPL-3.0, offline by design. Train on real Lightroom edits, preview the develop deltas, batch-write XMP sidecars on your own machine.
RedLnx automates develop settings locally. It does not pretend to replace Lightroom's renderer; it lives next to your existing workflow.
A RAW file is not a photo. It is a person. The faces of the people who chose you as their photographer, the GPS coordinates of the places you frequent, the hours you work, the scenes you shoot, the editing habits you have built over years. That is what leaves your machine when you upload a RAW to a cloud editing service.
That material lives on servers you cannot audit, often in jurisdictions you never consented to, often next to value chains (proprietary model training, biometric indexing, surveillance datasets) whose terms you never signed.
RedLnx exists because there is another way.
Local. Auditable. Free. Forever.
RedLnx is a local Rust + ONNX desktop app for photographers. It ingests real edits, filters obvious bad sources, adapts to your style, and writes local XMP outputs without a cloud handoff.
Import XMP folders or Lightroom catalogs as the source material for a style pack.
Skip neutral sources, duplicate or derived sources, and Adobe AI / Adaptive Color poison markers before training.
Build a Local AI style model from real edits you already trust, instead of starting from a generic preset.
Write local XMP sidecars for one style or multiple styles in dedicated output folders.
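The source-filtering step above can be sketched in a few lines of Rust. The marker strings and the "neutral" heuristic here are illustrative assumptions, not RedLnx's actual detection logic:

```rust
/// Hypothetical pre-training filter. The marker names below are
/// placeholders standing in for whatever keys Adobe AI / Adaptive
/// Color actually leave in a sidecar.
fn is_poisoned(xmp: &str) -> bool {
    ["AdaptiveColor", "AIGenerated"]
        .iter()
        .any(|marker| xmp.contains(marker))
}

/// A sidecar with no develop deltas teaches the model nothing.
/// Checking one key is an oversimplification for the sketch.
fn is_neutral(xmp: &str) -> bool {
    !xmp.contains("crs:Exposure2012")
}

fn keep_for_training(xmp: &str) -> bool {
    !is_poisoned(xmp) && !is_neutral(xmp)
}

fn main() {
    let edited = r#"<x:xmpmeta crs:Exposure2012="+0.65"/>"#;
    let ai = r#"<x:xmpmeta crs:Exposure2012="+0.30" AdaptiveColor="true"/>"#;
    println!("{} {}", keep_for_training(edited), keep_for_training(ai)); // true false
}
```

The real filter also drops duplicate and derived sources; deduplication is omitted here to keep the sketch short.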
The pipeline is concrete: train from real edits, inspect the develop deltas, then process a batch with one look or several.
Import XMP folders or Lightroom catalogs. RedLnx builds a local style profile from your real edits and filters obvious poisoned or neutral sources before training.
Inspect before/after slider deltas on sample photos, compare style behavior, and choose a representative cover image for the style pack.
Write local XMP sidecars for one style or multiple styles, fully offline, with dedicated output folders per style.
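The batch-output step can be sketched as plain local file writes. The folder layout, the settings map, and the minimal XMP envelope below are assumptions for illustration, not the shipped format:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Sketch: write one XMP sidecar per style into a dedicated
/// per-style output folder. Everything stays on local disk.
fn write_sidecar(
    out_root: &Path,
    style: &str,
    raw_stem: &str,
    settings: &[(&str, &str)],
) -> io::Result<PathBuf> {
    let dir = out_root.join(style);
    fs::create_dir_all(&dir)?;
    // Render develop settings as crs: attributes (illustrative envelope).
    let body: String = settings
        .iter()
        .map(|(k, v)| format!("  crs:{}=\"{}\"\n", k, v))
        .collect();
    let xmp = format!("<x:xmpmeta>\n{}</x:xmpmeta>\n", body);
    let path = dir.join(format!("{}.xmp", raw_stem));
    fs::write(&path, xmp)?;
    Ok(path)
}

fn main() -> io::Result<()> {
    let out = std::env::temp_dir().join("redlnx_demo");
    let p = write_sidecar(
        &out,
        "moody_bw",
        "DSC_0042",
        &[("Exposure2012", "+0.40"), ("Contrast2012", "+15")],
    )?;
    println!("wrote {}", p.display());
    Ok(())
}
```

Processing the same RAW with several styles is just the same call repeated with a different `style` argument, which is why each style gets its own folder.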
[ The Liberation Counter ]
A personal screen inside the app. Counts the hours of life RedLnx gave back compared with editing by hand. Counts the money you did not pay to incumbents. And, beneath everything, four zeros that name what did not leave your machine.
Faces, coordinates, timestamps, contexts: none of this left this machine. The number lives in user_profile.json on your disk and is never transmitted.
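The counter is just a local struct serialized to disk. The field names below mirror the four zeros, but the exact schema of user_profile.json is an assumption for this sketch, not the shipped format:

```rust
use std::fs;

/// Illustrative local-only counter. Nothing in this sketch
/// opens a socket; the file is written to local disk only.
#[derive(Default)]
struct LiberationCounter {
    hours_saved: f64,
    money_saved_eur: f64,
    files_uploaded: u64,     // stays 0 by design
    faces_indexed: u64,      // stays 0
    coordinates_shared: u64, // stays 0
    servers_contacted: u64,  // stays 0
}

impl LiberationCounter {
    // Hand-rolled JSON to keep the sketch dependency-free.
    fn to_json(&self) -> String {
        format!(
            "{{\"hours_saved\":{},\"money_saved_eur\":{},\"files_uploaded\":{},\"faces_indexed\":{},\"coordinates_shared\":{},\"servers_contacted\":{}}}",
            self.hours_saved,
            self.money_saved_eur,
            self.files_uploaded,
            self.faces_indexed,
            self.coordinates_shared,
            self.servers_contacted
        )
    }
}

fn main() -> std::io::Result<()> {
    let c = LiberationCounter {
        hours_saved: 12.5,
        money_saved_eur: 40.0,
        ..Default::default()
    };
    let path = std::env::temp_dir().join("user_profile.json");
    fs::write(&path, c.to_json())
}
```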
Four real interface views. Console, Train, Post-Produce, Style Pack.
[ LOCAL / PRIVACY / LICENSING ]
RedLnx is a native Rust desktop app with local storage and predictable offline execution. Once installed, training, inference, preview, and batch output stay on your hardware.
The workflow is Lightroom-oriented and XMP-based. RedLnx automates develop settings locally; Lightroom still handles final rendering.
The project is AGPLv3, open source, and free to download, with no telemetry, no tracker scripts, no subscriptions, no locked feature tiers, no account.
AGPLv3 is the trap. A corporation that wraps RedLnx into a proprietary cloud SaaS has to release the entire stack under the same terms. The license is the doctrine, written in legal code.
The plan is public. Status icons: ✓ shipped, ⟳ in progress, ○ planned. Priorities shift with community feedback.
Clustered local inference across a photographer's own LAN (no central server; peer machines share load). Federated training on free and open datasets (Wikimedia Commons, public-domain catalogues), never on user-uploaded RAWs, with differential-privacy weight aggregation. Sovereignty extended from a single machine to a community, without compromising the original promise.
[ Honest Funding ]
RedLnx stays free forever. No subscription, no paywall, no locked tier. If you want to help fund the next twelve months of operating costs and the remaining release work, a Kickstarter is coming in a couple of weeks. Until then, Ko-fi is open.
Where the money never goes: paid marketing, growth hacking, paid acquisition. Centralized inference servers. Account systems. Telemetry. Analytics. A "founder" badge. A "premium" tier. Any of it.
[ Live in ~2 weeks · post-Kickstarter ]
When the pre-launch Kickstarter closes, the names of supporters who opt in will appear here. Single mono list, alphabetical, no tiers, no badges, no perks. Recognition only. Pseudonyms welcome.
RedLnx writes local XMP sidecars for Lightroom-oriented workflows. It automates develop settings; it is not a final raw renderer by itself.
You can train from XMP folders or Lightroom catalogs. RedLnx profiles your existing edits and builds a Local AI adaptive style model.
Yes. RedLnx already skips neutral sources, duplicate or derived sources, and Adobe AI / Adaptive Color poison markers before training.
Yes. RedLnx can write separate outputs for multiple styles in dedicated folders.
No. CPU works too. On Windows, RedLnx can use CUDA when a compatible NVIDIA runtime is present, otherwise DirectML, then CPU fallback. On macOS, Metal. On Linux, WebGPU / Vulkan. GPU acceleration helps some parts of the pipeline more than others.
Yes. Training, inference, preview, and batch output are local once the app is installed. There is no cloud queue, no upload path, no account.
Training quality depends on the quality and consistency of the source edits. Cleaner source work produces a stronger style model.
No. RedLnx is AGPLv3, open source, and free to download. There is no subscription tier, no locked feature, no premium plan. Every feature is for everyone, forever.
With AI-assisted development tools, openly. The idea sat in my head for years: a local, sovereign editing assistant that simply did not exist. Working four hours a day after the real job, alone, I would never have shipped it without those tools. Every line is reviewed, tested, and decided by me; the architecture, the AGPL choice, the doctrine, and the manifesto are mine. AI tools are instruments, not authors, the way a camera is not the photographer. The codebase is AGPL-3.0, fully open and auditable: if anything does not match what is written here, open an issue. The shipped app runs 100% offline, with no telemetry and no cloud calls; AI helped write the code, it does not run inside it.
A personal screen inside the app that shows how much time RedLnx has given you back compared with editing by hand, alongside four zeros that name what did not leave your machine: files uploaded, faces indexed, coordinates shared, servers contacted. The number lives in user_profile.json on your disk and is never transmitted.
A one-time pre-launch campaign to cover code-signing, hosting, hardware test costs, and development hours. Not a subscription. Not a paywall. Not a tier system. Supporters who opt in are listed alphabetically on the Hall of Fame; everyone else still gets the same software, free, forever.
Long term, yes. The vision is federated training on free and open datasets, never on user-uploaded RAWs, with differential-privacy weight aggregation. Clustered local inference across a photographer's own LAN is also on the long-term map. Both are Phase 4 territory.
[ DOWNLOAD / RELEASES ]
Releases ship the desktop app and the local runtime flow. On Windows, auto mode prefers CUDA when the NVIDIA runtime looks compatible, otherwise it falls back to DirectML, then CPU. On macOS, Metal. On Linux, WebGPU / Vulkan.
CPU is supported too. Acceleration helps some parts of the pipeline more than others, and training quality still depends on the consistency of the source edits.
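The per-platform auto-selection order described above can be sketched as a simple decision function. The enum and function names are illustrative, not RedLnx's actual backend API; real selection would also probe the runtime at startup rather than trust the OS name alone:

```rust
/// Illustrative backend choice, mirroring the documented order:
/// Windows: CUDA -> DirectML (-> CPU at runtime if both fail),
/// macOS: Metal, Linux: WebGPU / Vulkan, anything else: CPU.
#[derive(Debug, PartialEq)]
enum Backend {
    Cuda,
    DirectMl,
    Metal,
    Vulkan,
    Cpu,
}

fn pick_backend(os: &str, cuda_ok: bool) -> Backend {
    match os {
        "windows" if cuda_ok => Backend::Cuda,
        "windows" => Backend::DirectMl, // CPU remains the runtime fallback
        "macos" => Backend::Metal,
        "linux" => Backend::Vulkan, // WebGPU / Vulkan path
        _ => Backend::Cpu,          // CPU always works
    }
}

fn main() {
    println!("{:?}", pick_backend("windows", false)); // DirectMl
}
```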