VerifyOne is the compliance review console Thai banks, lenders, telcos, and government services should already have. Reviewers see the ID scan, selfie, liveness check, and extracted fields on one screen — with AI confidence on every check and a defensible audit trail on every decision. Designed to meet Thai PDPA, BOT-regulated onboarding guidance, and AML/CTF workflow requirements.
Every bank, consumer lender, telco and government service in Thailand has a compliance team that reviews onboarding applications. The work is not glamorous. A senior analyst opens an application, switches to a second tab to see the ID scan, switches to a third tab to see the liveness capture, pastes the ID number into the watchlist tool, writes the decision in a free-text field, and moves to the next file. They do this several hundred times a day. One of those decisions will be audited by the regulator six months from now, and the trail needs to hold.
A Tuesday afternoon at a mid-sized consumer-finance lender in Bangkok. Queue depth: 312 pending applications. Team on shift: four reviewers, one senior reviewer, one QA analyst. Target SLA: 30 minutes per urgent case, two hours per standard. The team is running eight minutes behind on the urgent queue.
Application APP-10284 arrives. The reviewer opens it, clicks out to the document-viewer tab to see the front of the Thai national ID card, squints at the address line, switches tabs to the internal watchlist search, types in the ID number, waits, sees no match, switches back, opens the selfie in a new tab, opens the liveness video in yet another tab, decides the face match is fine, types "approved — all checks ok" into the notes field, and clicks approve. Ninety seconds, four tabs, one line of free text, no record of what the reviewer actually looked at. Multiply by 300 cases a day.
When the quarterly regulator review comes, the lender cannot reconstruct what the reviewer saw at the moment of approval — only the final decision. Any reviewer who rejected a case with the phrase "looked dodgy" (it happens) has just given the auditor an opening.
Most lenders stitch together a KYC capture SDK, a CRM, a document store, and an in-house spreadsheet. The reviewer sees none of it on one screen. They tab-switch hundreds of times a day, and the cognitive load goes into navigating tools rather than evaluating evidence.
Regulators increasingly want to know exactly what a reviewer saw, what the AI flagged, which fields were verified, and what override reasons were recorded. A notes field captures none of that. When the regulator asks "why was this approved?" the answer has to be reconstructed from Slack messages and memory.
Most global eKYC platforms treat the Thai national ID card as one more document template. The 13-digit format, the Buddhist-calendar date of birth, the Thai-script address line, and the specific anti-tamper markings on the card are treated as localisation edge cases. Thai regulators do not.
"My reviewers spend roughly half their shift in the tools that are supposed to help them. Not on the decision itself. The decision is fast — finding what they need to make it isn't."
Conversation with a head of compliance ops at a Thai consumer lender · February 2026
VerifyOne is built around a single observation: the reviewer already knows how to make the decision. What slows them down is not judgement — it is the three tabs, the two logins, and the free-text notes field that pretend to be an audit trail. VerifyOne collapses the evidence, the AI scores, the extracted fields, and the decision actions into one console, and captures every click against the regulator's schema automatically.
There is a queue with live SLA countdowns. There is a split-screen review panel with documents on the left and a structured decision panel on the right. There is a rejection modal that forces a reason code. There is a QA layer on top that samples decisions and tracks reviewer accuracy. There is an activity log that records every action ever taken, with model version, AI provider, and timestamp. None of that is optional.
The split-screen review panel is the hero screen of the product. ID front, ID back, selfie, liveness, proof of address — all on the left, with zoom, rotate, and full-screen controls. Extracted fields, confidence scores, face-match score, and the decision action bar on the right. The reviewer's eyes never leave the evidence while they make the call.
Every action — reading the document, correcting a field, approving, rejecting with reason, escalating, requesting re-upload — is a row in the review_histories table. Each row carries the user ID, the timestamp, the IP address, the AI model version, and the before/after values. Regulator asks "what did the reviewer see when they approved APP-10284?" — the answer is already indexed.
Thai national ID card parsing (front and back, 13-digit format, Buddhist-calendar date conversion, Thai-script address line). Watchlist integration with standard categories (fraud, identity theft, document forgery, sanctioned, PEP). Role-based access for the five roles a Thai compliance team actually runs: admin, operations manager, senior reviewer, reviewer, QA analyst. Thai data residency available via on-premises deployment or Thai-hosted VPS.
This isn't a KYC capture SDK. It's the review console that the capture SDK hands evidence to.
Five KPIs, three charts, one productivity table. Reviewer sees their day at a glance.
The reviewer signs in with their corporate credentials — SSO on enterprise deployments, local auth on pilot. The dashboard loads with five KPI cards across the top: Total Submitted Today, Pending Review, Approved Today, Rejected Today, Average Review Time. The Pending Review card shows a warning indicator if the queue is backing up beyond a configured threshold; the Average Review Time card shows a warning if the team is slipping the SLA. Below the KPIs: three charts for applications by status, risk level, and submission channel, and a team productivity table with per-reviewer SLA compliance. Reviewers see only their own stats; ops managers and admins see the full team.
Priority-ordered queue. Live SLA timers. One-click "Pick Next" for the highest-priority case.
The queue tab shows assigned applications ordered by priority — urgent first, then high, then normal — with a live SLA countdown on each row. A small pulsing indicator flags urgent items; the row turns warning-amber when the countdown drops below 30% of its SLA target, and gains a breached-SLA red tint if the timer reaches zero. Filters across the top cover risk level, priority, channel, and search by application number or applicant name. A "Pick Next" button in the page header auto-assigns the highest-priority unassigned case. Ops managers and admins see a Bulk Assign button that reviewers do not.
The hero screen. Document on the left, extracted fields on the right, face comparison collapsible below, action bar pinned.
Clicking a row opens the review console. The layout is designed for a 1920-wide monitor; below that it reflows with the face-comparison panel collapsible, and on 1440-wide laptops the right panel scrolls independently. The top bar shows the application number in monospace, the current status badge, the risk-level badge, the priority indicator, the submission channel, and a live timer counting up from the moment the reviewer opened the case — distinct from the SLA countdown. Up to four alert banners stack below the top bar in order of severity: watchlist, critical risk, previously-rejected applicant, expired ID.
Zoom, rotate, full-screen. Low-quality documents are flagged on their tab.
The left panel of the split screen is the document viewer. Tabs across the top switch between ID Front, ID Back, Selfie, and Proof of Address. Missing documents are shown as disabled tabs with a tooltip explaining the omission. Images that fail to load show an inline error state with a reload action rather than a broken thumbnail. The viewer supports zoom (scroll wheel and buttons, 50%–400%), rotate (90° / 180° / 270°), fit-to-width, fit-to-height, and full-screen mode. If an uploaded document is flagged as low-resolution by the capture pipeline, a warning overlay appears on the image — the reviewer sees the quality warning before they make a decision, not after.
Six fields per Thai ID, confidence score on each, one-click verify, inline edit for corrections.
The right panel shows the extracted-field table. For a Thai national ID, the six fields are full name, date of birth, ID number, address, expiry date, and gender. Each row carries the field name, the extracted value (OCR output), the per-field confidence score (success above 95%, warning 80–94%, danger below 80%), an inline-editable "manually corrected value" cell, and a per-row verification checkbox. If the OCR got a character wrong, the reviewer edits the value inline — the original OCR value is preserved as a separate column, and the correction creates an entry in the review history reading "Field 'full_name' corrected from 'Somchai Jaide' to 'Somchai Jaidee'."
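The confidence bands are simple thresholds. A minimal Python sketch of the badge mapping (illustrative only; VerifyOne itself is a Laravel application, and the function name is ours, not the product's):

```python
def confidence_band(score: float) -> str:
    """Map a per-field OCR confidence score (0-100) to its badge colour."""
    if score >= 95:
        return "success"   # green: above 95%
    if score >= 80:
        return "warning"   # amber: 80-94%
    return "danger"        # red: below 80%
```

Low-confidence fields sort to the top of the reviewer's attention precisely because this mapping is rendered next to every extracted value.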
Five decisions. Each triggers a modal with the required justification for the audit trail.
The action bar is pinned to the bottom of the screen. Five buttons, each with a single-key shortcut. Approve (A) opens a confirmation modal with the application number and applicant name; the decision is recorded with review duration, AI model version, and the verified-fields snapshot. Reject (R) opens a modal that requires a rejection reason code from 15 predefined reasons and a notes field of at least 20 characters — no free-text-only rejections. Request Re-upload (U) opens a modal where the reviewer selects which documents to request. Escalate (E) opens a modal with a dropdown of eligible senior reviewers. Skip (S) returns the case to the pool; the skip event is logged.
Every action becomes a timeline entry. Regulator-facing CSV export available on any date range.
The moment the reviewer clicks approve — or reject, or any of the five decisions — the audit entry is committed. A history drawer is toggled from the top-right corner of the review screen and shows the timeline newest-first: submission event, auto-assignment event, review start, each field correction, each document tab opened, the final decision with reason and notes, and any downstream QA event. Each entry records timestamp, acting user, IP address, user agent, AI model version, and (for field corrections) old and new values. The activity log is append-only; nothing is mutable once written.
Document on the left (ID front, ID back, selfie, proof of address), extracted fields on the right, face comparison collapsible below, action bar pinned at the bottom. The reviewer's eyes never leave the evidence. Designed for 1920-wide monitors; reflows on 1440 laptops.
Every pending application shows a live countdown derived from submission time and the priority-specific SLA target (default: urgent 30 min, high 60 min, normal 120 min — configurable). Countdowns turn warning-amber below 30% remaining and breach-red at zero. No static timestamps for the reviewer to subtract in their head.
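The countdown logic reduces to a few lines. A Python sketch under the defaults above (the 30%-remaining warning threshold and the per-priority targets are taken from this page; the function name is illustrative):

```python
from datetime import datetime, timedelta

# Default SLA targets in minutes; configurable per deployment.
SLA_MINUTES = {"urgent": 30, "high": 60, "normal": 120}

def sla_state(
    submitted_at: datetime, priority: str, now: datetime
) -> tuple[timedelta, str]:
    """Return (time remaining, display state) for a pending application."""
    target = timedelta(minutes=SLA_MINUTES[priority])
    remaining = submitted_at + target - now
    if remaining <= timedelta(0):
        return remaining, "breached"   # breach-red: timer at zero
    if remaining < target * 0.3:
        return remaining, "warning"    # warning-amber: under 30% remaining
    return remaining, "ok"
```

The point is that the reviewer never computes this: the queue row shows the state, not a submission timestamp.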
13-digit ID format validated at submission, Buddhist-calendar date auto-converted, Thai-script address parsed to structured sub-components, anti-tamper indicators from the card back surfaced to the reviewer. Passport MRZ parsing for Thai, Myanmar, Lao, Cambodian, and Vietnamese passports. Driving-licence formats covered for Thai issuances.
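Both the 13-digit check and the calendar conversion follow publicly documented rules: the Thai national ID carries a mod-11 check digit over its first twelve digits, and a Buddhist-era year is the Gregorian year plus 543. A Python sketch of both (illustrative; not the product's code):

```python
from datetime import date

def thai_id_checksum_ok(id13: str) -> bool:
    """Validate the mod-11 check digit of a 13-digit Thai national ID."""
    if len(id13) != 13 or not id13.isdigit():
        return False
    digits = [int(c) for c in id13]
    # First digit weighted x13, second x12, ... twelfth x2.
    total = sum(d * (13 - i) for i, d in enumerate(digits[:12]))
    return digits[12] == (11 - total % 11) % 10

def buddhist_to_gregorian(be_year: int, month: int, day: int) -> date:
    """Convert a Buddhist-era date, as printed on the card, to a CE date."""
    return date(be_year - 543, month, day)
```

Validating the checksum at submission catches transcription errors before a reviewer ever opens the case.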
Every extracted field carries its own OCR confidence score, rendered as a colour-coded badge next to the value (success above 95%, warning 80–94%, danger below 80%). Low-confidence fields are flagged in the table; the reviewer can focus their verification on the values the pipeline is uncertain about.
Three headline scores for the reviewer — AI confidence (overall assessment), face-match (ID photo vs selfie similarity), OCR confidence (text-extraction reliability) — each with a severity indicator and a tooltip explaining the score's meaning. Reviewers escalate low-score-combination cases without needing to recalculate anything by hand.
Rejections require a code from a predefined list (15 codes at MVP, editable by admin). No free-text-only rejections. Rejection notes are required, minimum 20 characters. Reason codes are grouped into four categories — document quality, data mismatch, fraud suspicion, policy violation — for rejection-analysis reporting.
Watchlist entries carry a risk category (fraud, identity theft, document forgery, sanctioned, PEP), a source field (internal database, immigration bureau, AMLO sanctions, PEP screening provider), and an active toggle. New applications are auto-matched against active entries at submission; matches promote the case to critical risk and raise a danger banner on the review screen.
QA analysts click "Pick Random Decision" and get a random recent decision that has not yet been QA-reviewed. They confirm or overturn. Overturns update the applicant-facing status and log the delta against the original reviewer's accuracy score. Overturn rate per reviewer becomes a row in the performance table.
Admin, operations manager, senior reviewer, reviewer, QA analyst. Permissions enforced at the middleware layer and in every Blade template. Reviewers see only their own queue and their own stats; senior reviewers handle escalated cases; QA analysts cannot approve or reject, only confirm or overturn.
Every action is a row in review_histories — user ID, timestamp, action type, old values, new values, IP address, user agent. Nothing is mutable. A PostgreSQL trigger rejects UPDATE and DELETE statements at the database layer. Retention configurable from 7 days to 10 years.
A compliance team of ten reviewers handling 800 applications a day moves from an industry-typical 4–6 minutes per review (most of it in tab-switching) to a demonstrable 90–150 seconds per review — and walks out of every decision with a structured audit entry rather than a one-line free-text note. That is the headline. The detail is in what the team's month looks like afterwards.
| Metric | Before VerifyOne | After VerifyOne (8-week pilot) |
|---|---|---|
| Average review time per case | 4–6 min | 90–150 sec |
| Tab-switches per review | 6–12 | 0–1 |
| Rejections with reason code | 20–40% (free-text majority) | 100% |
| Audit-trail fields captured per decision | 1–3 | 15+ |
| SLA-breach rate (urgent cases) | 15–30% | under 5% |
| QA overturn rate | 8–12% | 3–5% (post-reason-code rollout) |
| First-application regulator-review pass | Variable | Designed to meet BOT and AML/CTF review schemas |
Pre-pilot figures are industry benchmarks informed by conversations with three Thai compliance-ops leads in Q1 2026 and our founder's prior enterprise delivery — identity platforms processing 3M+ identity attributes across Southeast Asia. Post-pilot figures are the KPI targets we write into the pilot contract, not outcomes from delivered Inline One client pilots (Inline One was founded March 2026; no client pilot has completed as of this page's publication date).
In operational terms, a ten-reviewer floor absorbs the same 800 applications a day with three to four reviewers freed back into other work, or processes double the volume with the same headcount. In compliance terms, the audit trail moves from "defensible on a good day" to "the regulator's questions are already answered by the export." In regulatory-review terms, the rejection-reason-code discipline — enforced at the UI — is the single change with the largest impact on overturn rate and on external-review findings.
| Stack | Laravel 13 + Livewire 4 (+ Volt) + Preline UI 4 + Tailwind CSS 3 + Alpine.js + PostgreSQL 18 |
|---|---|
| Auth | Laravel Breeze on pilot; SAML 2.0 / OIDC SSO on enterprise deployments |
| Role model | Admin, operations manager, senior reviewer, reviewer, QA analyst (extensible) |
| Supported ID types | Thai national ID (front/back, 13-digit format), Thai passport (MRZ), Myanmar / Lao / Cambodian / Vietnamese passports (MRZ), Thai driving licence (car / motorcycle / temporary) |
| OCR and extraction | Upstream of VerifyOne — provider-agnostic (Gemini, NIPA Cloud AI, AWS Textract, self-hosted Tesseract, or a client-provided engine). VerifyOne consumes extracted fields plus per-field confidence and displays them. |
| Face match and liveness | Consumed from upstream capture provider (AWS Rekognition, Paravision, iProov, FaceTec, or a client's own engine). VerifyOne renders the score and the supporting ID-photo / selfie pair. |
| Throughput per reviewer | 40–60 cases per hour at steady state; bursts of 80+ sustainable |
| Concurrent reviewers per instance | 50+ on a 4 vCPU / 8 GB RAM server with PostgreSQL on the same host |
| Deployment | Cloud (Thai-hosted VPS), on-premises (Docker Compose), or air-gapped (public-sector) |
| Data residency | Thai data residency available via Thai-hosted deployment or on-premises at the client's data centre |
| Audit retention | Configurable 7 days to 10 years; default 7 years to align with BOT records-retention guidance |
| Exports | CSV and PDF for activity log, application register, reviewer performance, rejection analysis, daily operations |
| Integrations (roadmap) | Webhook-out on every decision; REST API for queue ingestion; AMLO sanctions list refresh; PEP screening plug-ins |
VerifyOne is built to pass a Thai bank's procurement review without custom exceptions and to satisfy the BOT's onboarding guidance, the Thai PDPA, and standard AML/CTF workflow expectations. It is designed to meet these standards — formal certification of any specific deployment is a deployment-level exercise.
Data minimisation is built in — the database records only the fields required for review, the audit trail, and regulator-facing reporting. Retention windows are configurable and enforced at the database layer. Subject-access requests export as a single report; right-to-erasure is handled by a soft-delete plus a scheduled hard-delete after the regulatory retention window, with the derived audit metadata retained because the regulator requires the trail.
The review console was designed with explicit reference to the Bank of Thailand's digital-onboarding and KYC guidance: structured decision capture, reviewer identification, timestamp on every step, reason-coded rejections, escalation routing. None of it is bolted on after the fact; it is the default workflow.
Watchlist integration covers the standard categories (fraud, identity theft, document forgery, sanctioned parties, politically-exposed persons). Every critical-risk case surfaces the matched watchlist entry directly on the review screen. Sanctioned-list refresh is a scheduled task; the source field on each watchlist entry is preserved in the audit trail so reviewers can prove which list version was active at the time of decision.
Five default roles with a full permission matrix enforced at Laravel middleware, controller, Livewire component, and Blade template layers. Extensible with custom roles per deployment.
TLS 1.3 in transit; AES-256 at rest for all document files and database columns containing PII; at-rest encryption keys rotated on a configurable schedule (90 days default). Document files are stored with randomised filenames; the original filename is kept only in the audit trail for forensic reconstruction.
The review_histories table is write-only from the application layer. A PostgreSQL trigger rejects UPDATE and DELETE statements at the database level as an additional safeguard. Exports are themselves logged as events, so "who pulled this report" is a row in the same table.
The product's architecture is designed to support an ISO 27001 / SOC 2 attestation at the deployment level; the pilot engagement includes guidance on how the client's existing ISO envelope extends to cover a VerifyOne deployment. Inline One does not itself hold these certifications at the product level as of this page's publication date; we say "designed to meet," not "certified."
VerifyOne deploys as a pilot with a written KPI and an honest ending. One compliance team, one site, one real queue. Either the measured outcome clears the KPI and we scale, or it doesn't and we part ways with the pilot report in the client's hands.
Operational walkthrough with the compliance team. Map the current review process tab-by-tab; identify every external system (document store, watchlist, CRM, case manager) the reviewer currently touches. Define the integration surface with the client's existing capture SDK. Write the KPIs. Data-residency decision made.
VerifyOne stood up in the client's environment. User accounts for the five roles, connected to the client's SSO where applicable. Watchlist seeded from the client's existing list. Rejection-reason codes configured from the client's existing taxonomy. Queue ingestion wired to the capture pipeline. Real applications, real reviewers, real decisions. Week 4 includes a one-hour reviewer-training session.
Measurement against the written KPIs. Honest gap analysis. Rejection-reason-code coverage, overturn rate, SLA compliance, average review time — all reported against pre-pilot baselines captured in Week 1. Recommendation: scale, continue with adjustments, or walk away. All three endings are valid.
If the pilot cleared the KPI: roll out to additional teams, enable SSO, deploy the QA-sampling layer, plug in the webhook-out integrations. Move to the SaaS or enterprise-licence commercial model. If it didn't clear: we hand over the pilot findings, cancel the provisional SaaS subscription, and part ways.
Fixed-price pilot. Walk-away clause. Roadmap influence for design partners. No surprises.
Deploy to one compliance team at one site. All costs fixed in week 0.
Hosted multi-tenant on Thai-hosted infrastructure. Priced per reviewer seat and per decision tier.
Annual licence. On-premises or dedicated-tenant deployment. Air-gapped option for public-sector.
review_histories table. Each row captures user ID, timestamp, IP address, user agent, AI model version (where relevant), and old/new values for field-level changes. UPDATE and DELETE on that table are rejected by a PostgreSQL trigger. Exports go out as CSV or PDF; the export itself is logged as an event in the same table.

Into VerifyOne: your capture SDK (queue ingestion), your OCR provider (extracted fields + confidence), your face-match and liveness provider (scores and source images), your watchlist source (sanctions, PEP, internal fraud list), your SSO (SAML 2.0 / OIDC).
Out of VerifyOne: webhooks on every decision event (for your case manager, underwriting system, CRM, notification service), CSV / PDF exports for regulator reporting, REST API for downstream systems that want to poll rather than subscribe.
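The webhook-out integration is a roadmap item, so the payload shape and signing scheme below are assumptions on our part, not a published spec. A common pattern for decision-event webhooks is an HMAC-SHA256 signature over the serialised event, which the receiving system verifies before trusting the payload:

```python
import hashlib
import hmac
import json

def sign_decision_event(event: dict, secret: bytes) -> tuple[bytes, str]:
    """Serialise a decision event and compute its HMAC-SHA256 signature."""
    body = json.dumps(event, separators=(",", ":"), sort_keys=True).encode()
    return body, hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(body: bytes, signature: str, secret: bytes) -> bool:
    """Receiver side: recompute and compare in constant time."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A case manager or underwriting system subscribing to decision events would verify the signature, then act on fields such as the application number and decision.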
The demo takes thirty minutes. We walk your senior reviewer through a real Thai-ID review, a reason-code rejection, a QA overturn, and an audit-log export — using either our seeded demo data or a sanitised sample of your own. If the split-screen console is obviously the right console for your team, you'll know it in the first ten minutes. If it isn't, we'll tell you what's missing, honestly.
VerifyOne didn't start from scratch. The review-console model, the Thai national ID handling, the audit-trail schema, and the reviewer-QA separation are all built on our founder's prior enterprise delivery — identity platforms processing 3M+ identity attributes across Southeast Asia (2017–2026), including Myanmar national ID extraction, face-recognition integration for onboarding, and compliance review consoles for regulated lenders and government services.
Three million attributes is how you learn which fields the reviewer actually looks at, where the free-text notes become a regulator-facing liability, how the overturn rate moves when rejection-reason codes are enforced at the UI, and why the audit trail needs to be append-only at the database layer, not just the application layer. Those are the decisions encoded in this product.