Self-hosted systems, public demos, private model stack

Lintel Labs is where I ship applied AI work without pretending the infrastructure does not matter.

I'm Izzet Abidi. I build deployment-grade coursework, self-hosted AI applications, and the longer-running platform work that sits under the ASHTON codename. This site is the public surface of that work.

Current focus: Coursework that deploys like product work
Runtime split: Static site on Cloudflare, apps on owned compute
Model policy: Public UI, private vLLM and internal services

About

A little more serious than a portfolio, less vague than a personal brand page.

I care about systems that can survive contact with real runtime constraints: GPU limits, network boundaries, deployment friction, audit trails, and the difference between a local demo and a public-facing surface.

The through-line across this work is the same whether the project starts as coursework or platform R&D: build a narrow slice, make it observable, deploy it honestly, and keep the interfaces readable as the system grows.

How I build

Proof over posture

I prefer bounded capability, clean interfaces, and deployable truth over broad claims or hand-wavy platform language.

Current stack

Owned infra first

Talos, Flux, Cloudflare, vLLM, LangGraph, Postgres, and Qdrant are already in the loop. The landing page is static; the applications are not.

Public identity

Lintel outside, ASHTON inside

Lintel is the public-facing name. ASHTON is the internal codename for the platform architecture that ties the longer-term work together.

Selected Work

Live projects with real deployment shape behind them.

Live now: summarizer.lintellabs.net

Summarizer

An extractive summarization project that combines TextRank and MMR to reduce redundancy while keeping the important sentences in view. The public app is narrow by design; the model and runtime services remain private.

  • TextRank + MMR summarization pipeline for news-style input.
  • Prometheus-monitored deployment rather than a classroom-only demo.
  • Public surface is intentionally thinner than the internal platform.
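The MMR step in that pipeline can be sketched in a few lines. Everything here is a hypothetical illustration, not the deployed code: `mmr_select`, the toy scores, and the λ weight are stand-ins, with TextRank centrality assumed to be precomputed per sentence.

```python
# Minimal sketch of Maximal Marginal Relevance (MMR) reranking:
# given TextRank-style relevance scores and a pairwise sentence-similarity
# matrix, greedily pick sentences that are relevant but not redundant.
import numpy as np

def mmr_select(relevance, similarity, k=3, lam=0.5):
    """Greedy MMR over precomputed scores.

    relevance:  (n,) array, e.g. TextRank centrality per sentence.
    similarity: (n, n) array of pairwise sentence similarities in [0, 1].
    lam:        trade-off between relevance and novelty.
    """
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            # Penalize similarity to anything already selected.
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy corpus: sentences 0 and 1 are near-duplicates, both highly relevant.
relevance = np.array([0.9, 0.85, 0.4])
similarity = np.array([
    [1.0, 0.95, 0.1],
    [0.95, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])
print(mmr_select(relevance, similarity, k=2))  # [0, 2]: the duplicate loses
```

With a pure relevance ranking, the two near-duplicate sentences would both make the summary; the redundancy penalty is what lets a lower-scored but novel sentence win the second slot.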

Live now: rag.lintellabs.net

Literary RAG Lab

A retrieval-augmented generation system over Dostoevsky and Nietzsche that uses FAISS retrieval, Sentence-Transformers embeddings, and a private vLLM backend. The point is not just retrieval quality, but a public surface a reviewer can actually grade.

  • Corpus fetch, cleanup, chunking, embedding, and exact-vector retrieval.
  • Grounded answers with citations and visible retrieved passages.
  • Hosted so the app can reach vLLM even when the reviewer cannot.
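The exact-vector retrieval step reduces to a brute-force inner-product search. As a sketch, here is a NumPy stand-in for what FAISS's flat (exhaustive) index does over L2-normalized embeddings; the random vectors are placeholders for real Sentence-Transformers output, and the dimensions and names are illustrative.

```python
# NumPy stand-in for exact-vector retrieval: score every chunk against
# the query by inner product (cosine, after normalization) and take top-k.
# A FAISS flat index does the same search, just faster and at scale.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # L2-normalize so inner product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# 100 corpus chunks, 384-d embeddings (placeholder for encoded passages).
corpus = normalize(rng.normal(size=(100, 384)))

def search(query_vec, index, k=5):
    """Exhaustive search: score all chunks, return top-k ids and scores."""
    scores = index @ normalize(query_vec)
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# A query that is a lightly perturbed copy of chunk 17 should retrieve it first.
query = corpus[17] + 0.01 * rng.normal(size=384)
ids, scores = search(query, corpus, k=3)
print(ids[0])  # 17
```

Exhaustive search is exactly what "exact-vector retrieval" buys: no approximation error in the ranking, at the cost of scoring every chunk, which is a fine trade at corpus sizes like a few authors' collected works.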

Live now: fraud.lintellabs.net

Fraud Sentinel

A fraud-detection system with a PyTorch model, FastAPI inference layer, SvelteKit dashboard, LangGraph review flow, and RAG-grounded analyst brief. The LLM explains; it does not make the fraud decision.

  • Classifier plus anomaly signal for transaction scoring.
  • Analyst queue, case workflow, and grounded review context.
  • Deployed as a proper app slice rather than a one-off UI mock.
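The "LLM explains, does not decide" boundary can be sketched as a scoring gate. This is a hypothetical illustration only: `route_transaction`, the thresholds, and the anomaly-bonus weight are invented for the example and are not the deployed model or values.

```python
# Hypothetical sketch of the decision gate: a supervised fraud probability
# is blended with a capped anomaly bonus, then routed deterministically.
# The LLM only writes the analyst brief afterwards; it never changes the verdict.

def route_transaction(fraud_prob, anomaly_z, block_at=0.9, review_at=0.6, w=0.15):
    """Blend classifier probability with an anomaly signal, then route.

    fraud_prob: supervised model output in [0, 1].
    anomaly_z:  unsupervised anomaly z-score (capped at 3 for the bonus).
    """
    bonus = w * max(0.0, min(anomaly_z, 3.0)) / 3.0
    risk = min(1.0, fraud_prob + bonus)
    if risk >= block_at:
        return "block", risk
    if risk >= review_at:
        return "analyst_review", risk  # lands in the LangGraph review queue
    return "allow", risk

# A borderline transaction that the anomaly signal pushes into review.
decision, risk = route_transaction(fraud_prob=0.55, anomaly_z=2.4)
print(decision)  # analyst_review
```

Keeping the routing deterministic like this is what makes the explanation layer safe to bolt on: the analyst brief can be wrong or vague without any transaction being blocked or allowed for the wrong reason.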

Platform Direction

ASHTON is the longer arc: modular services, narrow reads, honest boundaries.

The live course projects sit on top of a larger systems direction. Under the ASHTON codename, the work is split into bounded services instead of one shapeless app: physical truth, member truth, operator reads, shared contracts, and a GitOps deployment substrate.

I am not trying to fake a finished platform. The goal is to prove one real capability at a time and keep the system understandable as it expands.

Physical truth

ATHENA

Presence, occupancy, edge ingress, and bounded live deployment proof.

Member truth

APOLLO

Auth, profile state, explicit lobby membership, workouts, and deterministic previews.

Staff reads

HERMES

Read-only operational surfaces over stable upstream truth.

Deployment substrate

Prometheus

Talos, Flux, observability, private model services, and the runtime behind these demos.

Working Style

The bar is simple: working code, clean boundaries, credible deployment.

Narrow slices

One deployed capability is more valuable than a wide roadmap with no proof behind it.

Observable first

Metrics, traces, and explicit health surfaces matter before the system is considered real.

Infrastructure honesty

Public URLs are allowed to be simple. The runtime behind them should still be disciplined.

Readable growth

Projects should become more coherent as they expand, not more mysterious.