رصين · rseen.ai · Risk. Seen.

Sovereign AI for analytical documents — built in Saudi, run in Saudi.

Bank-grade drafts in minutes, every figure traceable to its source, and nothing leaves the Kingdom.

Client Pitch · Pre-Pilot
A product by GulfBoost
The problem

The RM's week.

Writing a CRC memo today takes 2–4 days. Then 1–2 more for rework.

What that actually looks like
  • Pull 7+ artefacts: facility register, SIMAH bureau report, collateral valuation, DBR schedule, AML / PEP / sanctions screens, income statement, balance sheet
  • Hand-compute Coverage, DBR (pre + post), LTV (market + forced-sale), DSCR vs SAMA minimum
  • Write 10–12 sections in whatever structure the RM prefers this week
  • Build stress scenarios — or skip them
  • Cross-cite every number — or don't, and let CRC chase
  • Respond to CRC challenges section by section
Most of that time doesn't go on writing. It goes on chasing the source for every figure — and doing it while client data stays inside the Kingdom.
The moat

Why general-purpose cloud AI can't solve this.

01

Data residency

Client financials cannot leave the Kingdom. PDPL and SAMA's outsourcing framework make this a hard wall, not a preference.

02

Numerical hallucination

Generic LLMs invent ratios and cite sources that don't exist. A CRC member will find it in 30 seconds.

03

No audit trail

There is no way to click a number in the output and see the source document chunk it came from.

04

Generic structure

The output reads like a business school essay, not a SAMA-aligned credit memo.

Rseen was built to sit inside these four constraints from day one, not work around them.
What Rseen is

A sovereign AI that drafts defensible analytical documents from your source materials — end to end, in minutes, with every figure anchored to a citation.

This deck walks through one use case — an RM drafting a credit memorandum for a Credit Risk Committee. The same engine handles almost any structured document your team produces: policy reviews, investment committee papers, audit memos, regulatory filings.

Rseen grew out of GulfBoost's internal automation for ERP implementations, where it has been producing technical documents, specifications, and migration reports for over a year. It was retargeted for Saudi financial services in 2026.

What it produces

A complete CRC-ready credit memorandum.

Eleven sections. Every ratio computed from the source, and traceable back to it.

Sections generated in the benchmark run
  • Executive Recommendation
  • Client Profile & Facility Summary
  • Consolidated Balance Sheet (3-year)
  • Income Statement & Cash Flow Analysis
  • Credit Risk Assessment
  • Collateral Inventory & Coverage
  • Decision Table (facility-level)
  • Stress Test — 7 scenarios
  • SAMA Compliance Check (KYC / AML / Sanctions)
  • Conditions Precedent & Financial Covenants
  • Monitoring Triggers
Output: a branded DOCX, ready for CRC distribution — alongside a machine-readable audit trail.
Sample · Recommendation section

Computed from the source, cited back to it.

1.875×
Collateral coverage
Cited to the collateral valuation document
30.5%
DBR (post-facility)
Under SAMA 55% ceiling, cited
38.65%
LTV (market basis)
Under SAMA 80% monitoring trigger, cited
DSCR · Breach — below SAMA minimum
Rseen surfaces the breach in the recommendation itself, instead of leaving it for CRC to find.

Facility Recommendation

The proposed SAR 40.0M murabaha facility is supported by collateral coverage of 1.875× on a forced-sale basis¹, with loan-to-value at 38.65% (market basis)² — both comfortably within SAMA guidance.

Debt service burden post-facility stands at 30.5%³, below the SAMA ceiling of 55%. SIMAH score of 680 is within acceptable range for Tier 2 approval.

Breach   Debt-service coverage ratio (EBITDA-based) falls below the SAMA minimum of 1.25×. Mitigation via monthly covenant testing and collateral top-up trigger recommended.

Sources
¹Collateral Valuation Report · Annex B, p. 14 · "forced-sale value SAR 75.0M"
²Collateral Valuation Report · p. 7 · "market value SAR 103.5M"
³DBR Schedule · FY2025 · net income basis

Click any superscript to jump to its endnote and source passage.

How it works

Four stages, fully local to the Kingdom.


Ingest

RM uploads facility register, financials, SIMAH, collateral, AML screens, KYC, policy thresholds. XLSX, PDF, DOCX, TXT.

Extract & cite

Rseen indexes every document, then extracts 150+ discrete findings, each anchored to its source chunk. On this run: 180 findings from 6 input files.

Generate sections

Rseen drafts each section using only the extracted findings plus a SAMA / IFRS knowledge base. Ratios are deterministically computed, not hallucinated. Citations injected inline.
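As a minimal sketch of what "deterministically computed" means here (illustrative only, not Rseen's actual code; the DBR and debt-service inputs below are placeholders), the core ratios reduce to plain arithmetic over source figures:

```python
# Illustrative sketch of deterministic ratio computation, not Rseen's code.
# Facility, collateral, and EBITDA figures mirror the ones quoted in this deck;
# monthly income and debt service are placeholder inputs for the example.

def compute_ratios(facility_sar_m, forced_sale_sar_m, market_value_sar_m,
                   monthly_obligations, monthly_income, ebitda, debt_service):
    """Compute the core credit ratios from source figures, deterministically."""
    return {
        "coverage_forced_sale": forced_sale_sar_m / facility_sar_m,
        "ltv_market_pct": facility_sar_m / market_value_sar_m * 100,
        "dbr_pct": monthly_obligations / monthly_income * 100,
        "dscr": ebitda / debt_service,
    }

ratios = compute_ratios(
    facility_sar_m=40.0, forced_sale_sar_m=75.0, market_value_sar_m=103.5,
    monthly_obligations=30.5, monthly_income=100.0,  # placeholder DBR inputs
    ebitda=7.2, debt_service=6.55,                   # placeholder debt service
)
assert abs(ratios["coverage_forced_sale"] - 1.875) < 1e-9  # 75.0 / 40.0
```

Because each ratio is a fixed formula over extracted figures, there is nothing for a model to invent; the LLM only writes the prose around the numbers.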

Audit & export

Citation integrity, cross-section consistency, and contradiction checks run before export. Output is a branded DOCX plus an audit report.

End-to-end: minutes, not days.
Evidence & citations

The audit pane.

Click any superscript in the memo and jump to its endnote and source passage.

Generated memo

The proposed facility is supported by collateral coverage of 1.875×¹ on a forced-sale basis, with loan-to-value at 38.65% — both within SAMA guidance.

Source document

Collateral schedule Annex B · Industrial land, forced-sale value SAR 75.0M; facility amount SAR 40.0M. Coverage = 75.0 / 40.0 = 1.875.

What this replaces
  • RM defending a ratio to CRC — now 1 click instead of 15 minutes of paper shuffling
  • CRC challenging a claim — now traced, not argued
  • Regulator review — every output has a complete audit trail from Day 1
0
Phantom citations across 11 sections
Measured on the benchmark run, not claimed. Every cite resolves to a real source chunk.
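A citation-integrity check of this kind can be sketched in a few lines; the citation syntax and data shapes below are assumptions for illustration, not Rseen's schema:

```python
# Hypothetical sketch of a citation-integrity check: every citation id in the
# memo must resolve to a known extracted finding, or it is flagged as phantom.
import re

def phantom_citations(memo_text, findings):
    """Return citation ids in the memo that resolve to no extracted finding."""
    cited = {int(m) for m in re.findall(r"\[\^(\d+)\]", memo_text)}
    known = {f["id"] for f in findings}
    return sorted(cited - known)

findings = [{"id": 1, "source": "Collateral Valuation Report, Annex B, p. 14"},
            {"id": 2, "source": "Collateral Valuation Report, p. 7"}]
memo = "Coverage of 1.875x[^1], LTV 38.65%[^2], DSCR breach[^3]."
print(phantom_citations(memo, findings))  # [3] -> one phantom cite to flag
```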
Stress testing

Seven scenarios. No N/A cells.

The section RMs skip most often is the one CRC scrutinises most. Rseen fills it in.

Scenario analysis · EBITDA × rate shock

Scenario           EBITDA   DSCR     DBR
Base case          7.2M     1.10×    30.5%
EBITDA −10%        6.5M     0.99×    33.9%
EBITDA −20%        5.8M     0.88×    38.1%
Rate +100bp        7.2M     1.04×    32.3%
Rate +200bp        7.2M     0.98×    34.2%
Combined mild      6.5M     0.94×    35.9%
Combined severe    5.8M     0.80×    40.4%
  • 7 scenarios, base case pinned to the authoritative metrics block
  • 0 all-N/A rows — every cell computed from source data
  • Scenario math reconciles across Recommendation, Cash Flow, and Stress sections
Numbers in the stress test still reconcile with the Recommendation and Cash Flow three pages later. In our experience, that isn't common.
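The EBITDA-shock rows in the table above reduce to simple arithmetic. One assumption in this sketch: debt service is backed out of the base-case DSCR (7.2M / 1.10×), and the rate-shock rows are omitted because they depend on the facility's repricing terms:

```python
# Illustrative reproduction of the EBITDA-shock scenario rows. Backing debt
# service out of the base-case DSCR is an assumption made for this sketch.
base_ebitda = 7.2
debt_service = base_ebitda / 1.10  # ~6.55M, implied by the base-case DSCR

for label, shock in [("Base case", 0.0),
                     ("EBITDA -10%", 0.10),
                     ("EBITDA -20%", 0.20)]:
    ebitda = base_ebitda * (1 - shock)
    print(f"{label:12s} EBITDA {ebitda:.1f}M  DSCR {ebitda / debt_service:.2f}x")
```

Because DSCR scales linearly with EBITDA here, the −10% and −20% rows land at 0.99× and 0.88×, matching the table.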
SAMA compliance

Compliance is a full section, with its own citations.

  • 24 AML / PEP / sanctions citations, each anchored to the screening document
  • KYC completeness scored against policy
  • Deterministic scanner for SIMAH score routing — no LLM guessing
  • Policy thresholds (DBR 55%, LTV 80%, DSCR 1.25×) tagged explicitly as Policy-default (SAMA) when used in monitoring triggers
Compliance is the hardest section to automate credibly — which is also why it's the hardest to fake.
24
Anchored AML citations
680
SIMAH score · deterministic
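A deterministic SIMAH scanner of the kind described above might look like this sketch; the regex and the tier cutoffs are illustrative placeholders, not SAMA or bank policy:

```python
# Hypothetical sketch of deterministic SIMAH score routing: a regex pulls the
# score from the bureau text, fixed thresholds route it. The 740/620 cutoffs
# are illustrative placeholders only; no LLM is involved at any point.
import re

def route_simah(bureau_text):
    """Extract the SIMAH score deterministically and route by threshold."""
    m = re.search(r"SIMAH\s+score[:\s]+(\d{3})", bureau_text, re.IGNORECASE)
    if not m:
        return None, "manual review"  # no score found: never guess
    score = int(m.group(1))
    tier = "Tier 1" if score >= 740 else "Tier 2" if score >= 620 else "decline"
    return score, tier

print(route_simah("SIMAH score: 680 as of FY2025"))  # (680, 'Tier 2')
```

The deck's benchmark case, a 680 score routed to Tier 2, falls out of the thresholds directly; an unparseable document routes to manual review rather than to a guess.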
Sovereignty

Sovereign by architecture.

Data never leaves the Kingdom. Not a configuration option — how the system is built.

Saudi-hosted models

LLM inference on Saudi-hosted models only — open-weight and sovereign frontier options. No international provider calls in the generation path.

On your boundary

On-premise, in Saudi sovereign cloud, or inside your existing private infrastructure. Your VMs, your network, your keys.

PDPL-aligned

Designed around PDPL and consistent with SAMA's outsourcing framework. No training on client data, ever. Guaranteed by contract.

If it runs in the Kingdom and your data stays in the Kingdom, the compliance conversation gets shorter.
Deployment

Three realistic deployment models.

A

On-premise

Full install inside your data centre. Your VMs, your network, your keys.

B

Saudi sovereign cloud

Single-tenant instance hosted in the Kingdom. We operate, you own the data boundary.

C

Any combination

Models on your GPUs or Saudi-hosted. Application in your data centre or in sovereign cloud. Data wherever your policy keeps it. The three choices are independent — pick each separately and we'll wire it together.

Integration surface
Web UI

RMs · EN + AR · full RTL

REST API

Workflow embedding · DOCX + JSON out

Auth

SSO (SAML / OIDC) · TOTP 2FA

DB-driven admin

Doc types · prompts · thresholds

Honest framing

We don't claim to replace the RM or the CRC.

What we take off their plate is the mechanical work — reconciliation, computation, citation. The judgment stays with them.

~0
Reviewer claims per memo
Across 11 sections on the benchmark run
Useful draft · ship-adjacent
Tier on the benchmark run
Logged & version-tracked
Every claim, every release
We publish our own scorecard because CRC members will ask for one.
The ask

A 4–6 week pilot that ends with Rseen tuned to your playbook.

You share
  • Your credit memorandum templates and the section structure your CRC expects
  • Internal credit policy, underwriting procedures, and covenant language
  • The regulatory documents you operate under — SAMA circulars, internal mappings, compliance guides
  • About two hours a week from one RM and one CRC member
You get
  • A Rseen instance tuned to your playbook — your templates, thresholds, covenants, and compliance terminology
  • Sample memos generated in your format, reviewed against your CRC's real challenge patterns
  • Deployment sizing for your infrastructure
  • No obligation to proceed
Appendix · A1
Beyond credit

Document types are DB-defined. Not hardcoded.

Credit memoranda are one application. The same engine produces any document type you define.

Credit assessment

Bank-grade CRC memos — the benchmark use case. 11 sections, SAMA-aligned.

SAMA compliance review

Policy alignment reviews against SAMA circulars. 7 sections, regulator-ready structure.

Any type you define

Admins create new document types in the UI: sections, prompts, citation patterns, policy thresholds — all editable. No code change.
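As a hedged sketch of what a DB-defined document type could look like, expressed as a plain record (field names here are illustrative assumptions, not Rseen's actual schema):

```python
# Illustrative shape of a DB-defined document type. Everything below would be
# editable in the admin UI; nothing here implies a code change per new type.
doc_type = {
    "name": "SAMA compliance review",
    "language": "en",  # or "ar" for full RTL output
    "sections": [
        {"title": "Scope & Circulars Reviewed",
         "prompt": "Summarise the circulars in scope, citing each source."},
        {"title": "Policy Alignment Findings",
         "prompt": "List alignment gaps against each circular, with citations."},
    ],
    "thresholds": {"dbr_pct": 55.0, "ltv_pct": 80.0, "dscr_min": 1.25},
    "citation_pattern": "endnote-superscript",
}
assert len(doc_type["sections"]) == 2
```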

Appendix · A2
Security posture

Built for procurement review.

  • SSO via SAML / OIDC
  • TOTP 2FA for every account
  • Role-based access control
  • Full audit logs · user, action, timestamp
  • Encryption at rest and in transit
  • Secret management · no secrets in code
  • Network isolation · no egress to external LLM APIs
  • No training on client data · contractual
Appendix · A3
Model flexibility

Bring your own. Or use ours. Or run air-gapped.

Saudi-hosted

Sovereign frontier and open-weight models, inference inside the Kingdom.

Your GPU

Bring your own model on your hardware. Rseen is model-agnostic via an OpenAI-compatible interface.

Fully air-gapped

Inference + embedding + UI all inside your network. No outbound connectivity required.

Model choice is a configuration, not a lock-in. You keep the option to switch as Saudi-hosted options evolve.
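One way to picture "configuration, not lock-in": endpoint settings as plain data, with placeholder URLs and model names standing in for whatever OpenAI-compatible endpoint you run:

```python
# Sketch of model choice as pure configuration. URLs and model names are
# placeholders; any OpenAI-compatible endpoint would slot in the same way.
MODEL_ENDPOINTS = {
    "saudi-hosted": {"base_url": "https://inference.example.sa/v1",
                     "model": "sovereign-frontier"},
    "on-prem-gpu":  {"base_url": "http://localhost:8000/v1",
                     "model": "open-weight-70b"},
    "air-gapped":   {"base_url": "http://10.0.0.5:8000/v1",
                     "model": "open-weight-70b"},
}

def select_endpoint(profile):
    """Resolve a deployment profile to its inference endpoint settings."""
    return MODEL_ENDPOINTS[profile]

print(select_endpoint("air-gapped")["base_url"])  # stays inside your network
```

Switching providers as Saudi-hosted options evolve is then a one-line configuration change, not a migration.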

Appendix · A4
Architecture · for your CIO

Four logical tiers. All inside your boundary.

Ingest

Doc parsers (XLSX, PDF, DOCX, TXT). Chunking + embedding. Vector store (pgvector).

Extract

Multi-agent extraction graph. Deterministic scanners (DBR, LTV, SIMAH). Finding dedup + period-aware aggregation.

Generate

DB-driven section prompts. Retrieval over findings + KB. Inline citation injection. Language control (EN / AR).

Audit

Citation integrity validator. Cross-section consistency. Contradiction + placeholder detector. DOCX + JSON out.

Python · FastAPI · PostgreSQL + pgvector · Next.js UI · Docker Compose or Kubernetes. Runs on 1 GPU for local inference or CPU-only with Saudi-hosted inference.
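The "finding dedup + period-aware aggregation" step in the Extract tier can be sketched as follows; the shapes and field names are assumptions for illustration, not the actual pipeline:

```python
# Hypothetical sketch of period-aware finding dedup: keep one finding per
# (metric, period) pair, preferring the higher-confidence extraction.
def dedupe_findings(findings):
    """Collapse duplicate findings, keeping the most confident per period."""
    best = {}
    for f in findings:
        key = (f["metric"], f["period"])
        if key not in best or f["confidence"] > best[key]["confidence"]:
            best[key] = f
    return list(best.values())

raw = [
    {"metric": "ebitda", "period": "FY2025", "value": 7.2, "confidence": 0.95},
    {"metric": "ebitda", "period": "FY2025", "value": 7.2, "confidence": 0.80},
    {"metric": "ebitda", "period": "FY2024", "value": 6.9, "confidence": 0.90},
]
print(len(dedupe_findings(raw)))  # 2: one finding per (metric, period)
```

Keying on the period is what stops an FY2024 figure from silently overwriting an FY2025 one when the same metric appears in several source documents.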

Appendix · A5
Roadmap · light pre-pilot

What's next — shaped by pilot feedback.

We hold the roadmap deliberately light until pilots tell us what matters most.

  • Pilot hardening — close reviewer-claim gaps identified on live files
  • Institution-specific playbooks — your thresholds, your structure, your covenant templates
  • Arabic output parity — full RTL memo generation with terminology anchored to your glossary
  • Additional document types — SAMA compliance review, policy docs, and whatever your team asks for next
  • Core banking integration — direct ingestion from your source systems, scoped per pilot