Kutup NQ

How The Runtime Works

Purpose

Conversation UI, evidence-first backend

The NQ interface is chat-like, but the runtime is not a generic chatbot. It is a staged technical workflow designed to keep answers inspectable, bounded by evidence, and stable enough for engineering and operational use.

  • Primary mode: Dataset-first retrieval
  • Language policy: Prompt language in, answer language out
  • Failure behavior: Local evidence fallback instead of silent failure

Stage 1

Intent And Routing

The browser checks whether the prompt is in scope, normalizes the query, detects supported language, extracts useful terms, and may add frontend ML signals such as embeddings and intent confidence. That keeps the backend payload small and targeted.

  • English and Turkish are treated as first-class inputs.
  • Frontend ML assists ranking when available, but the system still works without it.
  • Unsupported prompts are rejected before retrieval to avoid wasteful runtime work.
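
The routing behavior above can be sketched roughly as follows. The function names, the Turkish-character heuristic, and the term filter are illustrative assumptions for the sketch, not the actual NQ frontend code.

```python
from dataclasses import dataclass
from typing import List, Optional

SUPPORTED_LANGUAGES = {"en", "tr"}   # English and Turkish are first-class inputs
TURKISH_CHARS = set("çğıöşüÇĞİÖŞÜ")  # crude signal; a real system would use an ML detector

def detect_language(text: str) -> str:
    return "tr" if any(ch in TURKISH_CHARS for ch in text) else "en"

@dataclass
class RoutedQuery:
    normalized: str
    language: str
    terms: List[str]

def route(prompt: str) -> Optional[RoutedQuery]:
    normalized = " ".join(prompt.split()).lower()
    if not normalized:
        return None  # out-of-scope prompts are rejected before retrieval
    language = detect_language(normalized)
    if language not in SUPPORTED_LANGUAGES:
        return None
    # Extract useful terms; short stopword-like tokens are dropped.
    terms = [t for t in normalized.split() if len(t) > 2]
    return RoutedQuery(normalized, language, terms)
```

Rejecting unsupported prompts here, before any network call, is what keeps the backend payload small and the runtime work bounded.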

Stage 2

Execution Flow

01. Query preparation

The browser builds a compact request with normalized text, intent hints, language context, and optional frontend ranking signals.
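
A minimal sketch of what such a compact request might look like, assuming hypothetical field names (`q`, `lang`, `terms`) rather than the documented wire format:

```python
import json

def build_request(normalized, language, terms,
                  embedding=None, intent_confidence=None):
    payload = {
        "q": normalized,
        "lang": language,
        "terms": terms,
    }
    # Frontend ML signals are optional: the backend works without them.
    if embedding is not None:
        payload["embedding"] = embedding
    if intent_confidence is not None:
        payload["intent_confidence"] = intent_confidence
    return json.dumps(payload, ensure_ascii=False)
```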

02. Dataset retrieval

The backend searches versioned local datasets first. Turkish and English sources can both participate, but the scoring still favors the user’s language for relevance and readability.
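
One way to sketch language-favoring scoring; the boost factor and the term-overlap measure are assumptions for illustration, not NQ's actual ranking function.

```python
def score(doc_terms, doc_language, query_terms, user_language,
          same_language_boost=1.25):
    # Base relevance: fraction of query terms the document covers.
    overlap = len(set(doc_terms) & set(query_terms))
    base = overlap / max(len(query_terms), 1)
    # Both languages participate, but the user's language is favored.
    return base * (same_language_boost if doc_language == user_language else 1.0)
```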

03. Context enrichment

If the query is recency-sensitive, weakly matched, or explicitly asks for broader context, NQ can add Wikipedia and a curated industrial RSS layer.

  • Wikipedia fills neutral background gaps.
  • RSS is limited to approved industrial, manufacturing, IIoT, robotics, and sector-process sources.
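
The enrichment trigger can be summed up as a three-condition check; the threshold value is an illustrative assumption.

```python
def needs_enrichment(top_score, recency_sensitive, wants_broader_context,
                     weak_match_threshold=0.4):
    # Wikipedia and the RSS layer join only when the dataset match is weak,
    # the query is recency-sensitive, or broader context is explicitly asked for.
    return (recency_sensitive
            or wants_broader_context
            or top_score < weak_match_threshold)
```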

04. Evidence packet assembly

The strongest dataset, Wikipedia, and RSS excerpts are compacted into a single reference packet. The answer layer only sees that packet, not unrestricted web context.
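
A sketch of the compaction step under an assumed character budget; the budget size and the strongest-first ordering are illustrative, not NQ's real limits.

```python
def assemble_packet(excerpts, budget_chars=2000):
    """excerpts: list of (source, score, text) tuples; strongest kept first."""
    packet, used = [], 0
    for source, relevance, text in sorted(excerpts, key=lambda e: -e[1]):
        if used + len(text) > budget_chars:
            continue  # skip excerpts that would overflow the budget
        packet.append({"source": source, "text": text})
        used += len(text)
    return packet  # the answer layer sees only this packet, never the open web
```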

05. Draft and verification

Qwen drafts the answer from the supplied references. Step verifies or tightens it, removing unsupported claims when needed. If the remote synthesis path is unavailable, NQ returns a local evidence summary instead.
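
The draft/verify/fallback control flow might look like this; the callables stand in for the remote drafting, verification, and local-summary paths and are assumptions, not NQ's actual interfaces.

```python
def answer(packet, draft_remote, verify_remote, local_summary):
    """Returns (text, fallback_used)."""
    try:
        draft = draft_remote(packet)               # remote drafting call
        return verify_remote(draft, packet), False  # verify/tighten the draft
    except ConnectionError:
        # Remote synthesis unavailable: format a local evidence summary
        # instead of failing silently.
        return local_summary(packet), True
```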

06. Inspectable delivery

The final response reaches the thread with runtime metadata still attached, so the user can inspect evidence, used tools, rate limits, and fallback status.
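
An assumed shape for a delivered response with its metadata still attached; the field names are illustrative, not the documented schema.

```python
def deliver(text, tools_used, rate_limit_remaining, fallback_used, evidence):
    return {
        "text": text,
        "meta": {
            "tools": tools_used,                        # e.g. ["datasets", "wikipedia"]
            "rate_limit_remaining": rate_limit_remaining,
            "fallback_used": fallback_used,
            "evidence": evidence,                        # inspectable evidence cards
        },
    }
```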

Stage 3

Evidence Sources

Local datasets

Versioned JSON datasets remain the primary knowledge base for industrial automation, protocols, process control, SQL, and machine learning fundamentals.
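
Loading a versioned dataset could be as simple as the sketch below; the file layout (`version` plus `entries`) is an assumption for illustration.

```python
import json

def load_dataset(path):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # A version field keeps the knowledge base auditable across releases.
    return data["version"], data["entries"]
```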

Wikipedia

Used as neutral background context when the local datasets mention a concept but do not fully unpack it.

Approved RSS feeds

The RSS layer is now restricted to approved operational sources such as Automation World, ISA Interchange, MachineMetrics, Robohub, The Robot Report, Universal Robots, Packaging Strategies, Food Engineering, and selected manufacturing blogs.

Fallback summaries

If the remote LLM path is unavailable, the system formats a local response from the retrieved evidence instead of returning an empty failure.

RSS discovery hubs and RSS index pages are not used directly as feeds. Only concrete feed endpoints are allowed into the runtime list.
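
The "concrete feed endpoints only" rule reduces to an allowlist membership check; the sample entry below is a placeholder, not one of NQ's approved feeds.

```python
APPROVED_FEEDS = {
    "https://example.com/automation/feed.xml",  # placeholder entry
}

def accept_feed(url: str) -> bool:
    # Discovery hubs and RSS index pages never enter the runtime list;
    # only exact, pre-approved feed endpoints pass.
    return url in APPROVED_FEEDS
```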

Stage 4

Runtime Visibility

  • The thread can expose which tool classes were involved: datasets, Wikipedia, RSS, and the answer layer.
  • Model status is inspectable, including verification outcome and fallback use.
  • Evidence cards remain tied to the answer instead of becoming separate dashboard widgets.
  • Rate-limit information is visible when the backend or browser quota becomes relevant.

Stage 5

Safeguards

  1. Requests are validated and rate-limited before retrieval or synthesis begins.
  2. The answer layer is instructed to stay inside the provided references.
  3. Remote synthesis failure triggers a local fallback message instead of a dead end.
  4. The UI keeps runtime status subordinate but available, which makes failures diagnosable without cluttering the main thread.
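
The pre-retrieval rate limit in point 1 could be implemented as a token bucket; the capacity and refill rate here are illustrative assumptions, not NQ's actual quota.

```python
import time

class TokenBucket:
    def __init__(self, capacity=5, refill_per_sec=1.0):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected before retrieval or synthesis begins
```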