01 / eKYC REVIEW CONSOLE · PILOT-READY

The evidence on the left. The decision on the right. The audit trail behind every click.

VerifyOne is the compliance review console Thai banks, lenders, telcos, and government services should already have. Reviewers see the ID scan, selfie, liveness check, and extracted fields on one screen — with AI confidence on every check and a defensible audit trail on every decision. Designed to meet Thai PDPA, BOT-regulated onboarding guidance, and AML/CTF workflow requirements.

Thai national ID handled natively · Audit trail on every click · Thai data residency option
REVIEW PANEL — NOT A RENDER
02 / THE CHALLENGE

A reviewer's day is spent looking at evidence through the wrong window.

Every bank, consumer lender, telco, and government service in Thailand has a compliance team that reviews onboarding applications. The work is not glamorous. A senior analyst opens an application, switches to a second tab to see the ID scan, switches to a third tab to see the liveness capture, pastes the ID number into the watchlist tool, writes the decision in a free-text field, and moves to the next file. They do this several hundred times a day. One of those decisions will be audited by the regulator six months from now, and the trail needs to hold.

A Tuesday afternoon at a mid-sized consumer-finance lender in Bangkok. Queue depth: 312 pending applications. Team on shift: four reviewers, one senior reviewer, one QA analyst. Target SLA: 30 minutes per urgent case, two hours per standard. The team is running eight minutes behind on the urgent queue.

Application APP-10284 arrives. The reviewer opens it, clicks out to the document-viewer tab to see the front of the Thai national ID card, squints at the address line, switches tabs to the internal watchlist search, types in the ID number, waits, sees no match, switches back, opens the selfie in a new tab, opens the liveness video in yet another tab, decides the face match is fine, types "approved — all checks ok" into the notes field, and clicks approve. Ninety seconds, four tabs, one line of free text, no record of what the reviewer actually looked at. Multiply by 300 cases a day.

When the quarterly regulator review comes, the lender cannot reconstruct what the reviewer saw at the moment of approval — only the final decision. Any reviewer who rejected a case with the phrase "looked dodgy" (it happens) has just given the auditor an opening.

01

The review UI is a compromise of four other systems.

Most lenders stitch together a KYC capture SDK, a CRM, a document store, and an in-house spreadsheet. The reviewer sees none of it on one screen. They tab-switch hundreds of times a day, and the cognitive load goes into navigating tools rather than evaluating evidence.

02

The audit trail is a paragraph of free text.

Regulators increasingly want to know exactly what a reviewer saw, what the AI flagged, which fields were verified, and what override reasons were recorded. A notes field captures none of that. When the regulator asks "why was this approved?" the answer has to be reconstructed from Slack messages and memory.

03

Thai national ID handling is an afterthought.

Most global eKYC platforms treat the Thai national ID card as one more document template. The 13-digit format, the Buddhist-calendar date of birth, the Thai-script address line, and the specific anti-tamper markings on the card are treated as localisation edge cases. Thai regulators do not.

"My reviewers spend roughly half their shift in the tools that are supposed to help them. Not on the decision itself. The decision is fast — finding what they need to make it isn't."

Conversation with a head of compliance ops at a Thai consumer lender · February 2026
03 / THE APPROACH

One screen. One decision. One audit entry per click.

VerifyOne is built around a single observation: the reviewer already knows how to make the decision. What slows them down is not judgement — it is the three tabs, the two logins, and the free-text notes field that pretend to be an audit trail. VerifyOne collapses the evidence, the AI scores, the extracted fields, and the decision actions into one console, and captures every click against the regulator's schema automatically.

There is a queue with live SLA countdowns. There is a split-screen review panel with documents on the left and a structured decision panel on the right. There is a rejection modal that forces a reason code. There is a QA layer on top that samples decisions and tracks reviewer accuracy. There is an activity log that records every action ever taken, with model version, AI provider, and timestamp. None of that is optional.

01

Evidence and decision on one screen

The split-screen review panel is the hero screen of the product. ID front, ID back, selfie, liveness, proof of address — all on the left, with zoom, rotate, and full-screen controls. Extracted fields, confidence scores, face-match score, and the decision action bar on the right. The reviewer's eyes never leave the evidence while they make the call.

02

Structured audit trail on every click

Every action — reading the document, correcting a field, approving, rejecting with reason, escalating, requesting re-upload — is a row in the review_histories table. Each row carries the user ID, the timestamp, the IP address, the AI model version, and the before/after values. Regulator asks "what did the reviewer see when they approved APP-10284?" — the answer is already indexed.

03

Built for the Thai stack

Thai national ID card parsing (front and back, 13-digit format, Buddhist-calendar date conversion, Thai-script address line). Watchlist integration with standard categories (fraud, identity theft, document forgery, sanctioned, PEP). Role-based access for the five roles a Thai compliance team actually runs: admin, operations manager, senior reviewer, reviewer, QA analyst. Thai data residency available via on-premises deployment or Thai-hosted VPS.

This isn't a KYC capture SDK. It's the review console that the capture SDK hands evidence to.

04 / THE WORKFLOW

Seven steps, queue to audit-ready decision.

Dashboard · Shift summary
Submitted
486
Pending
312
Approved
147
Rejected
27
Avg time
02:14
Team productivity
Niran P. · 97% SLA
Kamon R. · 93% SLA
Ploy S. · 82% SLA
Thanet V. · 95% SLA
STEP 01 · SIGN IN

Reviewer signs in, lands on their dashboard.

Five KPIs, three charts, one productivity table. Reviewer sees their day at a glance.

The reviewer signs in with their corporate credentials — SSO on enterprise deployments, local auth on pilot. The dashboard loads with five KPI cards across the top: Total Submitted Today, Pending Review, Approved Today, Rejected Today, Average Review Time. The Pending Review card shows a warning indicator if the queue is backing up beyond a configured threshold; the Average Review Time card shows a warning if the team is slipping the SLA. Below the KPIs: three charts for applications by status, risk level, and submission channel, and a team productivity table with per-reviewer SLA compliance. Reviewers see only their own stats; ops managers and admins see the full team.
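The two warning indicators described above reduce to a simple threshold check. A minimal sketch in Python — the threshold values and function name are illustrative assumptions, not VerifyOne's shipped defaults:

```python
# Illustrative thresholds (assumptions, not shipped defaults):
PENDING_WARN = 250        # queue depth that triggers the backlog warning
SLA_TARGET_SEC = 150      # target average review time, in seconds

def kpi_warnings(pending: int, avg_review_sec: float) -> dict:
    """Return which KPI cards should show a warning indicator."""
    return {
        "pending_review": pending > PENDING_WARN,
        "avg_review_time": avg_review_sec > SLA_TARGET_SEC,
    }
```

With the numbers from the mock dashboard above (312 pending, 02:14 average), only the Pending Review card warns.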

STEP 02 · QUEUE

Pick the next application from the queue.

Priority-ordered queue. Live SLA timers. One-click "Pick Next" for the highest-priority case.

The queue tab shows assigned applications ordered by priority — urgent first, then high, then normal — with a live SLA countdown on each row. A small pulsing indicator flags urgent items; the row border turns warning-red when the countdown drops under thirty minutes, and the row gains a breached-SLA warning tint if the timer reaches zero. Filters across the top cover risk level, priority, channel, and search by application number or applicant name. A "Pick Next" button in the page header auto-assigns the highest-priority unassigned case. Ops managers and admins see a Bulk Assign button that reviewers do not.

My Assigned (14) Unassigned (38) All Pending (312)
URGENT
APP-10284 · Somchai K.
Web · Critical risk · Watchlist flag
00:08:12
HIGH
APP-10301 · Sarah K.
Mobile · High risk · OCR 81%
00:47:30
NORMAL
APP-10318 · John D.
API · Normal risk · All checks pass
01:42:05
Filters
Pick Next
APP-10284 CRITICAL RISK Review 02:14
! Watchlist match — identity theft category View entry
ID FRONT · ID BACK · SELFIE · ADDRESS
AI Confidence
94.7%
Face Match
96.3%
OCR
98.1%
Reject
Re-upload
Escalate
Approve
STEP 03 · REVIEW

Open the split-screen review.

The hero screen. Document on the left, extracted fields on the right, face comparison collapsible below, action bar pinned.

Clicking a row opens the review console. The layout is designed for a 1920-wide monitor; below that it reflows with the face-comparison panel collapsible, and on 1440-wide laptops the right panel scrolls independently. The top bar shows the application number in monospace, the current status badge, the risk-level badge, the priority indicator, the submission channel, and a live timer counting up from the moment the reviewer opened the case — distinct from the SLA countdown. Up to four alert banners stack below the top bar in order of severity: watchlist, critical risk, previously-rejected applicant, expired ID.

STEP 04 · EXAMINE

Examine the evidence.

Zoom, rotate, full-screen. Low-quality documents are flagged on their tab.

The left panel of the split screen is the document viewer. Tabs across the top switch between ID Front, ID Back, Selfie, and Proof of Address. Missing documents are shown as disabled tabs with a tooltip explaining the omission. Images that fail to load show an inline error state with a reload action rather than a broken thumbnail. The viewer supports zoom (scroll wheel and buttons, 50%–400%), rotate (90° / 180° / 270°), fit-to-width, fit-to-height, and full-screen mode. If an uploaded document is flagged as low-resolution by the capture pipeline, a warning overlay appears on the image — the reviewer sees the quality warning before they make a decision, not after.

ID Front
ID Back
Selfie
Address —
100%
+
Extracted fields · Thai national ID
Full name Somchai Jaidee 98%
Date of birth 1988-07-12 (พ.ศ. 2531) 97%
ID number 1-2345-67890-12-1 99%
Address 123 ซ.สุขุมวิท 23, วัฒนา, กรุงเทพฯ 84%
Expiry 2030-07-11 99%
Gender Male 99%
STEP 05 · VERIFY

Verify the extracted fields.

Six fields per Thai ID, confidence score on each, one-click verify, inline edit for corrections.

The right panel shows the extracted-field table. For a Thai national ID, the six fields are full name, date of birth, ID number, address, expiry date, and gender. Each row carries the field name, the extracted value (OCR output), the per-field confidence score (success above 95%, warning 80–94%, danger below 80%), an inline-editable "manually corrected value" cell, and a per-row verification checkbox. If the OCR got a character wrong, the reviewer edits the value inline — the original OCR value is preserved as a separate column, and the correction creates an entry in the review history reading "Field 'full_name' corrected from 'Somchai Jaide' to 'Somchai Jaidee'."
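A correction like the one above maps directly to a history entry. A hedged sketch — the dictionary shape is an illustrative assumption that mirrors the review-history columns the text describes, not the actual schema:

```python
from datetime import datetime, timezone

def correct_field(field: str, ocr_value: str, corrected: str, user_id: int) -> dict:
    """Build the history entry for an inline field correction."""
    return {
        "action": "field_corrected",
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "old_value": ocr_value,      # original OCR output, preserved
        "new_value": corrected,      # the reviewer's inline edit
        "summary": f"Field '{field}' corrected from '{ocr_value}' to '{corrected}'",
    }
```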

STEP 06 · DECIDE

Decide.

Five decisions. Each triggers a modal with the required justification for the audit trail.

The action bar is pinned to the bottom of the screen. Five buttons, each with a single-key shortcut. Approve (A) opens a confirmation modal with the application number and applicant name; the decision is recorded with review duration, AI model version, and the verified-fields snapshot. Reject (R) opens a modal that requires a rejection reason code from 15 predefined reasons and a notes field of at least 20 characters — no free-text-only rejections. Request Re-upload (U) opens a modal where the reviewer selects which documents to request. Escalate (E) opens a modal with a dropdown of eligible senior reviewers. Skip (S) returns the case to the pool; the skip event is logged.
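The rejection rules above — a predefined reason code plus notes of at least 20 characters — amount to a small validation function. A sketch; the two-code list is a stand-in for the real 15:

```python
REASON_CODES = {"FM-02", "DQ-01"}   # illustrative subset of the 15 codes
MIN_NOTE_LEN = 20

def validate_rejection(reason_code: str, notes: str) -> list[str]:
    """Return validation errors; an empty list means the reject can proceed."""
    errors = []
    if reason_code not in REASON_CODES:
        errors.append("reason_code must be one of the predefined codes")
    if len(notes.strip()) < MIN_NOTE_LEN:
        errors.append(f"notes must be at least {MIN_NOTE_LEN} characters")
    return errors
```

A one-line "looked dodgy" fails both checks, which is exactly the point.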

Reject application · APP-10284
Face mismatch (FM-02) ▾
Selfie face geometry does not align with the ID photo at zoom parity. Referring to senior reviewer.
R Reject
Cancel
Confirm reject
History · APP-10284 Export CSV · PDF
Approved · reason snapshot stored 10:42:10
Field 'address' corrected 10:42:08
Officer opened selfie tab 10:42:05
AI extracted fields · v2.4.1 10:42:03
Submitted via Web 10:42:01
Append-only · UPDATE / DELETE rejected at DB
STEP 07 · AUDIT

The audit entry is already written.

Every action becomes a timeline entry. Regulator-facing CSV export available on any date range.

The moment the reviewer clicks approve — or reject, or any of the five decisions — the audit entry is committed. A history drawer is toggled from the top-right corner of the review screen and shows the timeline newest-first: submission event, auto-assignment event, review start, each field correction, each document tab opened, the final decision with reason and notes, and any downstream QA event. Each entry records timestamp, acting user, IP address, user agent, AI model version, and (for field corrections) old and new values. The activity log is append-only; nothing is mutable once written.

05 / KEY FEATURES

Ten details that make this work on a real compliance floor.

01

Split-screen review console

Document on the left (ID front, ID back, selfie, proof of address), extracted fields on the right, face comparison collapsible below, action bar pinned at the bottom. The reviewer's eyes never leave the evidence. Designed for 1920-wide monitors; reflows on 1440 laptops.

02

Live SLA countdowns in the queue

Every pending application shows a live countdown derived from submission time and the priority-specific SLA target (default: urgent 30 min, high 60 min, normal 120 min — configurable). Countdowns turn warning-amber below 30% remaining and breach-red at zero. No static timestamps for the reviewer to subtract in their head.
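The colour states reduce to one threshold function. A sketch using the stated defaults; the function and state names are illustrative:

```python
SLA_MIN = {"urgent": 30, "high": 60, "normal": 120}   # default SLA targets

def countdown_state(priority: str, elapsed_min: float) -> str:
    """'ok' | 'warning' (amber, <30% remaining) | 'breached' (red, at zero)."""
    remaining = SLA_MIN[priority] - elapsed_min
    if remaining <= 0:
        return "breached"
    if remaining < 0.3 * SLA_MIN[priority]:
        return "warning"
    return "ok"
```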

03

Thai national ID card parsing

13-digit ID format validated at submission, Buddhist-calendar date auto-converted, Thai-script address parsed to structured sub-components, anti-tamper indicators from the card back surfaced to the reviewer. Passport MRZ parsing for Thai, Myanmar, Lao, Cambodian, and Vietnamese passports. Driving-licence formats covered for Thai issuances.
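The 13-digit validation and Buddhist-calendar conversion mentioned above look roughly like this. The checksum is the standard mod-11 scheme used on Thai national IDs; the code is an illustrative sketch, not VerifyOne's parser:

```python
def thai_id_valid(id13: str) -> bool:
    """Validate a Thai national ID number (hyphens ignored, mod-11 checksum)."""
    digits = [int(d) for d in id13 if d.isdigit()]
    if len(digits) != 13:
        return False
    # Weights 13 down to 2 over the first 12 digits; check digit is the 13th.
    weighted = sum(d * w for d, w in zip(digits[:12], range(13, 1, -1)))
    return (11 - weighted % 11) % 10 == digits[12]

def be_to_ce(buddhist_year: int) -> int:
    """Convert a Buddhist-Era year (พ.ศ.) to the Common-Era year."""
    return buddhist_year - 543
```

So พ.ศ. 2531 converts to 1988, matching the sample record in the review panel.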

04

Per-field confidence scores

Every extracted field carries its own OCR confidence score, rendered as a colour-coded badge next to the value (success above 95%, warning 80–94%, danger below 80%). Low-confidence fields are flagged in the table; the reviewer can focus their verification on the values the pipeline is uncertain about.

05

AI confidence, face-match, and OCR scores

Three headline scores for the reviewer — AI confidence (overall assessment), face-match (ID photo vs selfie similarity), OCR confidence (text-extraction reliability) — each with a severity indicator and a tooltip explaining the score's meaning. Reviewers escalate low-score-combination cases without needing to recalculate anything by hand.

06

Reason-code-driven rejections

Rejections require a code from a predefined list (15 codes at MVP, editable by admin). No free-text-only rejections. Rejection notes are required, minimum 20 characters. Reason codes are grouped into four categories — document quality, data mismatch, fraud suspicion, policy violation — for rejection-analysis reporting.

07

Watchlist with auto-flagging

Watchlist entries carry a risk category (fraud, identity theft, document forgery, sanctioned, PEP), a source field (internal database, immigration bureau, AMLO sanctions, PEP screening provider), and an active toggle. New applications are auto-matched against active entries at submission; matches promote the case to critical risk and raise a danger banner on the review screen.
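The submission-time auto-matching can be sketched as follows. Record shapes and the linear scan are assumptions for illustration; a production implementation would match in the database:

```python
def screen_application(app: dict, watchlist: list[dict]) -> dict:
    """Match against active watchlist entries; a hit promotes to critical risk."""
    hit = next(
        (w for w in watchlist
         if w["active"] and w["id_number"] == app["id_number"]),
        None,
    )
    if hit:
        app["risk_level"] = "critical"
        app["watchlist_match"] = hit["category"]   # drives the danger banner
    return app
```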

08

QA sampling and overturn tracking

QA analysts click "Pick Random Decision" and get a random recent decision that has not yet been QA-reviewed. They confirm or overturn. Overturns update the applicant-facing status and log the delta against the original reviewer's accuracy score. Overturn rate per reviewer becomes a row in the performance table.

09

Role-based access for five roles

Admin, operations manager, senior reviewer, reviewer, QA analyst. Permissions enforced at the middleware layer and at every Blade template. Reviewers only see their own queue and their own stats; senior reviewers handle escalated cases; QA analysts cannot approve or reject, only confirm or overturn.

10

Append-only audit trail

Every action is a row in review_histories — user ID, timestamp, action type, old values, new values, IP address, user agent. Nothing is mutable. A PostgreSQL trigger rejects UPDATE and DELETE statements at the database layer. Retention configurable from 7 days to 10 years.
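At the application layer, the write-only contract looks roughly like this sketch. The hard guarantee remains the database trigger; this class is illustrative, not VerifyOne's code:

```python
class AppendOnlyLog:
    """In-memory illustration of the review_histories write-only contract."""

    def __init__(self):
        self._rows: list[dict] = []

    def append(self, row: dict) -> int:
        self._rows.append(dict(row))    # copy: callers cannot mutate later
        return len(self._rows) - 1

    def rows(self) -> list[dict]:
        return [dict(r) for r in self._rows]   # read-only copies

    def update(self, *_args, **_kwargs):
        raise PermissionError("review_histories is append-only")

    delete = update   # DELETE is rejected the same way
```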

06 / THE OUTCOME

Sixty percent less review time per case. A hundred percent audit trail.

A compliance team of ten reviewers handling 800 applications a day moves from an industry-typical 4–6 minutes per review (most of it in tab-switching) to a target of 90–150 seconds per review — and walks out of every decision with a structured audit entry rather than a one-line free-text note. That is the headline. The detail is in what the team's month looks like afterwards.

Metric · Before VerifyOne · After VerifyOne (8-week pilot)
Average review time per case · 4–6 min · 90–150 sec
Tab-switches per review · 6–12 · 0–1
Rejections with reason code · 20–40% (free-text majority) · 100%
Audit-trail fields captured per decision · 1–3 · 15+
SLA-breach rate (urgent cases) · 15–30% · under 5%
QA overturn rate · 8–12% · 3–5% (post-reason-code rollout)
First-application regulator-review pass · Variable · Designed to meet BOT and AML/CTF review schemas

Pre-pilot figures are industry benchmarks informed by conversations with three Thai compliance-ops leads in Q1 2026 and our founder's prior enterprise delivery — identity platforms processing 3M+ identity attributes across Southeast Asia. Post-pilot figures are the KPI targets we write into the pilot contract, not outcomes from delivered Inline One client pilots (Inline One was founded March 2026; no client pilot has completed as of this page's publication date).

In operational terms, a ten-reviewer floor absorbs the same 800 applications a day with three to four reviewers freed back into other work, or processes double the volume with the same headcount. In compliance terms, the audit trail moves from "defensible on a good day" to "the regulator's questions are already answered by the export." In regulatory-review terms, the rejection-reason-code discipline — enforced at the UI — is the single change with the largest impact on overturn rate and on external-review findings.

07 / FIT

Five teams that feel this daily. And three that probably don't.

This is for

Bank and digital-bank compliance ops reviewing retail onboarding at volume.
Split-screen review, BOT-schema audit trail, Thai national ID native handling.

Consumer lenders and BNPL operators running KYC at the point of loan approval.
Pairs directly with BNPL Loan Manager; the onboarding screen a reviewer sees maps straight to the loan-decisioning screen downstream.

Telcos running eKYC for prepaid SIM registration, postpaid activation, and number porting.
Thai NBTC-style registration flows covered; high throughput per reviewer.

Government services running citizen-facing onboarding for digital services.
On-premises deployment, Thai data residency, structured audit trail for agency-level review.

Lending compliance functions at Thai mid-market enterprises.
Same review console, applied to business-identity documents rather than retail ones.

This probably isn't for

Teams looking for a consumer-facing KYC capture SDK.
VerifyOne is the compliance review console, not the customer onboarding app. The capture SDK feeds evidence into VerifyOne; it is a different product. Integration with third-party capture SDKs is in scope; building the capture SDK is not.

Single-person compliance functions at very small SMEs.
The role-based-access model and QA layer assume a team of 3+. A solo reviewer at a small firm gets most of the value but pays for capability they will not use.

Organisations requiring biometric liveness capture on the reviewer side.
VerifyOne displays the liveness result from the capture provider; it does not run a liveness check during review. Liveness belongs in the customer app, not in the reviewer's console.
08 / TECHNICAL

What IT and compliance procurement need to know in two screens.

Stack · Laravel 13 + Livewire 4 (+ Volt) + Preline UI 4 + Tailwind CSS 3 + Alpine.js + PostgreSQL 18
Auth · Laravel Breeze on pilot; SAML 2.0 / OIDC SSO on enterprise deployments
Role model · Admin, operations manager, senior reviewer, reviewer, QA analyst (extensible)
Supported ID types · Thai national ID (front/back, 13-digit format), Thai passport (MRZ), Myanmar / Lao / Cambodian / Vietnamese passports (MRZ), Thai driving licence (car / motorcycle / temporary)
OCR and extraction · Upstream of VerifyOne — provider-agnostic (Gemini, NIPA Cloud AI, AWS Textract, self-hosted Tesseract, or a client-provided engine). VerifyOne consumes extracted fields plus per-field confidence and displays them.
Face match and liveness · Consumed from upstream capture provider (AWS Rekognition, Paravision, iProov, FaceTec, or a client's own engine). VerifyOne renders the score and the supporting ID-photo / selfie pair.
Throughput per reviewer · 24–40 cases per hour at steady state (90–150 sec per case); bursts of 50+ sustainable
Concurrent reviewers per instance · 50+ on a 4 vCPU / 8 GB RAM server with PostgreSQL on the same host
Deployment · Cloud (Thai-hosted VPS), on-premises (Docker Compose), or air-gapped (public-sector)
Data residency · Thai data residency available via Thai-hosted deployment or on-premises at the client's data centre
Audit retention · Configurable 7 days to 10 years; default 7 years to align with BOT records-retention guidance
Exports · CSV and PDF for activity log, application register, reviewer performance, rejection analysis, daily operations
Integrations (roadmap) · Webhook-out on every decision; REST API for queue ingestion; AMLO sanctions list refresh; PEP screening plug-ins

VerifyOne is built to pass a Thai bank's procurement review without custom exceptions and to satisfy the BOT's onboarding guidance, the Thai PDPA, and standard AML/CTF workflow expectations. It is designed to meet these standards — formal certification of any specific deployment is a deployment-level exercise.

Thai PDPA

Data minimisation is built in — the database records only the fields required for review, the audit trail, and regulator-facing reporting. Retention windows are configurable and enforced at the database layer. Subject-access requests export as a single report; right-to-erasure is handled by a soft-delete plus a scheduled hard-delete after the regulatory retention window, with the derived audit metadata retained because the regulator requires the trail.

BOT onboarding guidance

The review console was designed with explicit reference to the Bank of Thailand's digital-onboarding and KYC guidance: structured decision capture, reviewer identification, timestamp on every step, reason-coded rejections, escalation routing. None of it is bolted on after the fact; it is the default workflow.

AML/CTF workflow hooks

Watchlist integration covers the standard categories (fraud, identity theft, document forgery, sanctioned parties, politically-exposed persons). Every critical-risk case surfaces the matched watchlist entry directly on the review screen. Sanctioned-list refresh is a scheduled task; the source field on each watchlist entry is preserved in the audit trail so reviewers can prove which list version was active at the time of decision.

Role-based access control

Five default roles with a full permission matrix enforced at Laravel middleware, controller, Livewire component, and Blade template layers. Extensible with custom roles per deployment.

Encryption

TLS 1.3 in transit; AES-256 at rest for all document files and database columns containing PII; at-rest encryption keys rotated on a configurable schedule (90 days default). Document files are stored with randomised filenames; the original filename is kept only in the audit trail for forensic reconstruction.

Append-only audit trail

The review_histories table is write-only from the application layer. A PostgreSQL trigger rejects UPDATE and DELETE statements at the database level as an additional safeguard. Exports are themselves logged as events, so "who pulled this report" is a row in the same table.

ISO 27001, SOC 2

The product's architecture is designed to support an ISO 27001 / SOC 2 attestation at the deployment level; the pilot engagement includes guidance on how the client's existing ISO envelope extends to cover a VerifyOne deployment. Inline One does not itself hold these certifications at the product level as of this page's publication date; we say "designed to meet," not "certified."

09 / PILOT ENGAGEMENT

Eight weeks. Fixed price. Walk-away clause.

VerifyOne deploys as a pilot with a written KPI and an honest ending. One compliance team, one site, one real queue. Either the measured outcome clears the KPI and we scale, or it doesn't and we part ways with the pilot report in the client's hands.

WEEK 1–2

Discover

Operational walkthrough with the compliance team. Map the current review process tab-by-tab; identify every external system (document store, watchlist, CRM, case manager) the reviewer currently touches. Define the integration surface with the client's existing capture SDK. Write the KPIs. Data-residency decision made.

WEEK 3–6

Pilot deployment

VerifyOne stood up in the client's environment. User accounts for the five roles, connected to the client's SSO where applicable. Watchlist seeded from the client's existing list. Rejection-reason codes configured from the client's existing taxonomy. Queue ingestion wired to the capture pipeline. Real applications, real reviewers, real decisions. Week 4 includes a one-hour reviewer-training session.

WEEK 7–8

Measurement

Measurement against the written KPIs. Honest gap analysis. Rejection-reason-code coverage, overturn rate, SLA compliance, average review time — all reported against pre-pilot baselines captured in Week 1. Recommendation: scale, continue with adjustments, or walk away. All three endings are valid.

MONTH 3+

Scale (or walk)

If the pilot cleared the KPI: roll out to additional teams, enable SSO, deploy the QA-sampling layer, plug in the webhook-out integrations. Move to the SaaS or enterprise-licence commercial model. If it didn't clear: we hand over the pilot findings, cancel the provisional SaaS subscription, and part ways.

Fixed-price pilot. Walk-away clause. Roadmap influence for design partners. No surprises.

10 / PRICING

Three ways to engage.

Pilot (8 weeks, fixed)

Deploy to one compliance team at one site. All costs fixed in week 0.

  • 8-week managed deployment
  • Up to 3 ID types optimised (Thai national ID is always one)
  • Integration with one upstream capture pipeline
  • Weekly demos, honest KPI report
  • Data-residency choice (Thai-hosted VPS / on-premises / air-gapped)
On request — typically ฿ 650,000 – 1,200,000 depending on integration scope and data-residency constraints.
Request pilot pricing →

SaaS (post-pilot)

Hosted multi-tenant on Thai-hosted infrastructure. Priced per reviewer seat and per decision tier.

  • Per-reviewer seat (active concurrent reviewers)
  • Per-decision tier (monthly decision volume bands)
  • Standard audit retention (7 years; configurable 90 days to 10 years)
  • QA layer included
  • Webhook-out integrations included
From ฿ 28,000 / month (5 reviewer seats, up to 5,000 decisions / month).
See SaaS pricing detail →

Enterprise licence

Annual licence. On-premises or dedicated-tenant deployment. Air-gapped option for public-sector.

  • Unlimited reviewers, unlimited decisions
  • On-premises or dedicated-tenant deployment
  • SSO (SAML 2.0 / OIDC) included
  • Custom rejection-reason taxonomy
  • AMLO sanctions list integration
  • Named engineering contact and quarterly architecture reviews
On request.
Talk to us about enterprise →
11 / QUESTIONS WE GET

Twelve questions compliance and IT ask first.

Is VerifyOne the customer-facing KYC capture app, or the reviewer's console?
The reviewer's console. The customer-facing capture app (document photo, selfie, liveness) is a separate product category — we integrate with any major capture provider (AWS Rekognition + Textract, Paravision, iProov, FaceTec, and most Thai-market KYC SDKs). VerifyOne is where the compliance team reviews what the capture app collected, makes the approval decision, and leaves the audit trail.
Are you certified for BOT, Thai PDPA, ISO 27001, SOC 2, AML/CTF?
We say "designed to meet," not "certified." VerifyOne's architecture, data model, role-based access, audit trail, retention handling, and review workflow were built with explicit reference to Thai PDPA, Bank of Thailand digital-onboarding guidance, and standard AML/CTF workflow expectations. Formal attestation (ISO 27001, SOC 2) is a deployment-level exercise — the pilot engagement includes guidance on extending the client's existing ISO envelope to cover a VerifyOne deployment. Inline One as a product company does not itself hold these certifications at the time of writing, and we would rather say so than blur the line.
How does the audit trail actually work?
Every action on an application — submission, auto-assignment, review start, document tab opened, field correction, verification checkbox toggled, decision, escalation, re-upload request, QA check, field change, status change — writes a row to the append-only review_histories table. Each row captures user ID, timestamp, IP address, user agent, AI model version (where relevant), and old/new values for field-level changes. UPDATE and DELETE on that table are rejected by a PostgreSQL trigger. Exports go out as CSV or PDF; the export itself is logged as an event in the same table.
What's the Thai data-residency story?
Three deployment modes: Thai-hosted VPS (for pilots and mid-market SaaS), on-premises at the client's data centre (for banks, large lenders, regulated enterprises), or air-gapped (for public-sector deployments). In every mode, all VerifyOne processing and storage stays on infrastructure the client controls or has designated. Upstream providers (your chosen OCR or liveness engine) are your procurement decision — VerifyOne integrates with Thai-resident providers (including NIPA Cloud AI) where residency is a hard requirement.
Can it handle the Thai national ID card natively, including the Buddhist calendar?
Yes. 13-digit ID number format validated and checksum-verified at submission, Buddhist-calendar date of birth auto-converted to Gregorian for internal storage and back to Buddhist for display where appropriate, Thai-script address parsed to structured sub-components (house number, soi, sub-district, district, province), front-and-back card handling with MRZ-like logic for the back of the card, anti-tamper indicators from the card back surfaced on the review screen. Passport MRZ is handled for Thai, Myanmar, Lao, Cambodian, and Vietnamese passports.
We already have a KYC platform. Why would we replace the review UI specifically?
In most Thai deployments we've studied, the "KYC platform" is a capture SDK plus a data store plus a stitched-together review UI that the reviewer opens alongside a CRM, a watchlist search tool, and a notes field in the case manager. Replacing the capture SDK is a much bigger project than replacing the review UI. VerifyOne targets the review UI specifically because that is where the reviewer spends their time and where the audit trail is currently weakest.
What integrates into VerifyOne, and what does VerifyOne integrate into?

Into VerifyOne: your capture SDK (queue ingestion), your OCR provider (extracted fields + confidence), your face-match and liveness provider (scores and source images), your watchlist source (sanctions, PEP, internal fraud list), your SSO (SAML 2.0 / OIDC).

Out of VerifyOne: webhooks on every decision event (for your case manager, underwriting system, CRM, notification service), CSV / PDF exports for regulator reporting, REST API for downstream systems that want to poll rather than subscribe.
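To make the "webhooks on every decision event" concrete, here is an illustrative payload a downstream case manager might receive. The field names and event name are assumptions for this sketch, not VerifyOne's published webhook schema.

```python
import json

# Illustrative decision-event payload. Field names are assumptions
# for this sketch, not a documented VerifyOne schema.
decision_event = {
    "event": "application.decided",
    "application_id": "APP-10284",
    "decision": "approved",
    "reason_codes": [],
    "reviewer_id": "rev-042",
    "decided_at": "2026-03-17T14:32:08+07:00",
}

body = json.dumps(decision_event)  # what the subscriber's endpoint receives
```

Systems that prefer polling can fetch the same decision records over the REST API instead of subscribing.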

How does it handle concurrent reviewers on the same application?
A soft-lock heartbeat on the application record. When reviewer A opens APP-10284, a field updates with reviewer A's session ID and a timestamp. If reviewer B opens the same case, the review screen shows a warning banner — "This application is currently being reviewed by Reviewer A. Proceeding may cause a conflict." — and they can choose to continue in read-only mode or back out. The lock expires after a configurable idle period (default 15 minutes).
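The soft-lock decision logic reduces to a small check: is there a live heartbeat, and whose is it? A minimal sketch, with illustrative names and the 15-minute default from above hard-coded for clarity:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

IDLE_EXPIRY = timedelta(minutes=15)  # configurable; default from the text

@dataclass
class ReviewLock:
    session_id: str       # reviewer's session that holds the lock
    touched_at: datetime  # last heartbeat timestamp

def try_acquire(lock: Optional[ReviewLock], session_id: str,
                now: datetime) -> str:
    """Return 'acquired' or 'held' for a reviewer opening a case."""
    if lock is None or now - lock.touched_at > IDLE_EXPIRY:
        return "acquired"   # no lock, or stale heartbeat: take over
    if lock.session_id == session_id:
        return "acquired"   # same reviewer refreshing their own heartbeat
    return "held"           # another reviewer is active: show the banner
```

When `try_acquire` returns `"held"`, the console shows the warning banner and offers read-only mode; it never hard-blocks, because a crashed session must not strand a case.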
How accurate is the OCR / face-match? Is VerifyOne responsible for those numbers?
No. VerifyOne is provider-agnostic: OCR and face-match run in your chosen upstream provider, and VerifyOne renders the scores they produce. The accuracy numbers belong to the capture provider, not to VerifyOne. What VerifyOne is responsible for is surfacing those scores honestly — per-field confidence badges, aggregate confidence rings, low-confidence flags — so the reviewer is never surprised by a score they did not see at the time of decision.
What about multilingual support?
Thai and English at MVP. The UI supports both; reviewers can switch their interface language per-user. The document-OCR language support depends on the upstream OCR provider. Myanmar, Lao, Khmer, and Vietnamese passport MRZ is parsed natively; full Myanmar-script and Lao-script extraction is on the roadmap based on our founder's prior OCR work in those scripts.
Can our QA team sample automatically instead of picking random decisions manually?
Yes. The QA sampling algorithm is configurable: random across all decisions, weighted toward rejections (the default — rejections are sampled at a higher rate), weighted toward low-confidence cases, or rule-based (e.g., every nth decision by a specific reviewer, or every decision over a specified risk threshold). QA analysts can still hand-pick cases; the automatic sampling fills the QA queue in the background without their involvement.
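The weighted mode above amounts to a per-outcome sampling rate with a floor for low-confidence cases. A minimal sketch — the rates and field names here are illustrative, not product defaults:

```python
import random

# Per-outcome sampling rates; rejections oversampled, as described
# above. Numbers are illustrative, not VerifyOne's shipped defaults.
SAMPLE_RATES = {"rejected": 0.30, "approved": 0.05, "escalated": 0.10}

def sample_for_qa(decision: dict, rng: random.Random) -> bool:
    """Decide whether a finished decision enters the QA queue."""
    rate = SAMPLE_RATES.get(decision["outcome"], 0.05)
    if decision.get("low_confidence"):
        rate = max(rate, 0.50)  # weight toward low-confidence cases
    return rng.random() < rate
```

Rule-based modes (every nth decision by a reviewer, risk-threshold triggers) layer on top of the same entry point: every completed decision passes through one sampler, so QA coverage is a policy setting, not an analyst's memory.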
What does the pilot contract actually look like?
One page of commercial terms (fixed price, payment schedule, walk-away clause), one page of KPI definitions (average review time, SLA compliance rate, reason-code coverage, overturn rate), one page of data-processing terms (Thai PDPA compliant processor-clauses, data-residency commitment, retention and deletion terms), and a one-page scope document naming the compliance team, the site, the ID types, and the upstream capture provider. Total: four pages, signed in one round.
13 / START

Open one real application in the review console. See where your compliance team's afternoon goes.

The demo takes thirty minutes. We walk your senior reviewer through a real Thai-ID review, a reason-code rejection, a QA overturn, and an audit-log export — using either our seeded demo data or a sanitised sample of your own. If the split-screen console is obviously the right console for your team, you'll know it in the first ten minutes. If it isn't, we'll tell you what's missing, honestly.

Book a demo walkthrough · Talk to our compliance lead · paing@inlineone.com
14 / WHERE THIS CAME FROM

Three million identity attributes. One product, built from what that teaches you.

VerifyOne didn't start from scratch. The review-console model, the Thai national ID handling, the audit-trail schema, and the reviewer-QA separation are all built on our founder's prior enterprise delivery — identity platforms processing 3M+ identity attributes across Southeast Asia (2017–2026), including Myanmar national ID extraction, face-recognition integration for onboarding, and compliance review consoles for regulated lenders and government services.

Three million attributes is how you learn which fields the reviewer actually looks at, where the free-text notes become a regulator-facing liability, how the overturn rate moves when rejection-reason codes are enforced at the UI, and why the audit trail needs to be append-only at the database layer, not just the application layer. Those are the decisions encoded in this product.

KEYBOARD SHORTCUTS
A: Approve
R: Reject
U: Request re-upload
E: Escalate
S: Skip
H: Toggle history
F: Toggle face panel
1–5: Document tabs
Esc: Close modal
Read the full track record →
Legal & compliance footnotes
  1. All prior-delivery figures cited on this page are from our founder's prior enterprise work between 2017 and 2026, prior to founding Inline One Systems in March 2026. They are not Inline One customer outcomes.
  2. The "3M+ identity attributes" figure refers to aggregate identity attributes processed through identity platforms our founder previously shipped at prior roles. It is not an Inline One metric.
  3. "Pilot-ready" means VerifyOne has been built, seeded with 500+ realistic applications across every workflow state, and deployed in staging with working end-to-end flows. No Inline One client pilot has completed as of this page's publication date.
  4. "Designed to meet" Thai PDPA, BOT digital-onboarding guidance, AML/CTF workflow expectations, ISO 27001, and SOC 2 is a statement about product architecture. It is not a claim of certification. Formal attestation of any specific deployment is a deployment-level exercise scoped during the pilot engagement.
  5. Thai national ID, passport, and driving-licence format handling reflects formats in effect as of Q1 2026. Format changes by Thai issuing authorities are handled as part of the standard maintenance schedule.
  6. OCR, face-match, and liveness accuracy figures are the responsibility of the upstream capture and extraction provider selected by the client, not of VerifyOne. VerifyOne renders the scores those providers produce.