Pipeline Setup · <15 Min · 🎯 Pilot → Production · 👍👎 Real User Feedback Loop · 🤖 Tune Assistants & Agents
A user clicks 👎 in your OAC AI chat — now turn it into a fix.

OAC AI Feedback Tuning Pipeline

Set up in under 15 minutes — graduate Oracle Analytics Cloud (OAC) AI Assistants & Agents to production with confidence.

Phase A · Capture
Capture Real Feedback
A Terraform stack wires OAC → OCI Logging → Object Storage. Every utterance, response and thumbs-up / thumbs-down on AI Assistants & Agents lands as structured JSON.
~5 min · Resource Manager
Phase B · Database
🗄️
Land in Autonomous DB
One Cloud Shell script creates the SODA collection, ingestion procedure, scheduler job and analytics-ready views — zero laptop install.
~5 min · Cloud Shell
Phase C · Tune
📊
Tune to Production
Open the OAC workbook, trace thumbs-down feedback to the generated LSQL, then refine synonyms (Assistants) & supplemental instructions (Agents) — and ship.
~5 min · OAC Dashboard
Overview · 60-second read

Turn every 👎 into a tuning signal — and graduate OAC AI Assistants & Agents from pilot to production.

The problem

Without a feedback loop, AI pilots stall. Authors can't see which answers failed or why. Tuning becomes guesswork. Users stop trusting the Assistant and Agent. Analysts remain the bottleneck.

The fix

A packaged 3-phase pipeline captures every 👍 / 👎 on your OAC AI Assistants & Agents — together with the prompt and the generated Logical SQL — into an OAC workbook. Authors review, tune the right layer (synonyms, instructions, knowledge documents), and re-publish the same day.

📋 What this is

A feedback tuning pipeline for OAC AI

  • Captures every 👍 / 👎 with the prompt + the generated LSQL.
  • Lands it in Autonomous Database on the connector's scheduled cadence.
  • Surfaces it in an OAC workbook the Author opens weekly.
⚠️ Why it’s needed

OAC AI knows your schema — not your business

  • Without a feedback loop, authors can’t see which answers failed or why.
  • Tuning becomes guesswork; users stop trusting the Assistant and Agent.
  • The pilot stalls. Analysts remain the bottleneck.
🎯 Business value

Production-grade AI, tuned weekly

  • Self-service delivered. 10-second answers, not 3-day BI requests.
  • Evidence per release. Every fix traced to a real 👎 and the exact LSQL.
  • Same-day remediation. Author refines, re-publishes, closes the loop.
⚡ The 3 phases

Browser-driven, end-to-end

  • A · Capture. Upload a Terraform ZIP to Resource Manager. ~5 min
  • B · Land. Run one script in Cloud Shell. ~5 min
  • C · Tune. Build the OAC workbook and start the weekly loop. ~5 min
Closed loop: User asks → AI answers · 👍 / 👎 → Author tunes → Back in production
See the 3 steps below
Pipeline Setup · 3 Phases 🎯 Pilot → Production

Set up the feedback tuning pipeline in under 15 minutes.

Three phases, fully browser-driven. Each phase below shows what to do and the exact artifact to download — right where you need it.

1
📦
Phase A · OCI Resource Manager
Provision the capture layer

Upload the Terraform bundle to Resource Manager. OCI provisions the bucket, log group, Service Connector, Dynamic Group, and policies.

  • Upload the packaged ZIP as a new stack.
  • Select the target compartment and paste the OAC + ADB OCIDs.
  • Click Plan → Apply. OCI does the rest.
⚡ Auto-Provisioned
oac-feedback-phase1.zip ~ 20 KB · Terraform + schema.yaml
≈ 5 min · no local Terraform required
Captures
2
☁️
Phase B · OCI Cloud Shell
Install the database pipeline

Run a single shell script in Cloud Shell. It creates the OACFB user, installs the ingestion procedure, arms the scheduler, and builds OAC_FEEDBACK_WITH_LSQL.

  • Upload the installer and the SQL bundle to Cloud Shell.
  • Run the installer — answer 4 prompts (compartment, ADB, wallet, admin password).
  • Installer is idempotent — safe to re-run.
⚙️ DB · Automated
install_oac_feedback_pipeline.sh ~ 6 KB · bash · idempotent
oac-feedback-pipeline-sql.zip ~ 18 KB · 2 .sql files
≈ 5 min · 1 script, 4 prompts
Feeds
3
📈
Phase C · OAC Workbook
Build the tuning dashboard

Point OAC at the view, build a workbook, and start the weekly tuning loop — synonyms, sample questions, supplemental instructions, knowledge documents.

  • Create a Dataset on OAC_FEEDBACK_WITH_LSQL.
  • Build visuals: 👎 rate, top miss patterns, LSQL drill-through.
  • Review weekly — tune the right layer, re-publish, close the loop.
🎯 Tuning Loop
Built in OAC · no download · Follow Phase C guide below ↓
≈ 5 min · then ~15 min/week
Phase A

Setup Capture Layer with Resource Manager

Provision OCI capture layer via Resource Manager — packaged ZIP, click Apply (8 steps)

🎯 Packaged solution

The Terraform bundle is pre-packaged and ready to upload. You don't build it — we ship it. The ZIP has the correct structure (.tf files at root, schema.yaml included).

Download oac-feedback-phase1.zip from the GitHub release. Save it anywhere convenient — Downloads or Desktop is fine. You will not unzip it.

Where: github.com/oracle-samples/OAC-Logs-Automation → Releases → Latest → Assets → oac-feedback-phase1.zip
🔍 What is Resource Manager?

OCI's managed Terraform service. Runs Terraform inside OCI — no install on your laptop, authenticates you via your console session, stores state for you.

Sign in at cloud.oracle.com. Open the hamburger menu and go to Developer Services → Resource Manager → Stacks. Set the compartment filter to where you want the stack record to live (separate from where the stack's resources will be created — that's set in Step 4).

Path: Developer Services → Resource Manager → Stacks
You should see the Stacks page. Ready to create the stack.
📦 What's a stack?

A stack = a Terraform configuration plus its variable values saved in OCI. Click buttons on it to Plan, Apply, Destroy.

Click Create stack. On the Create stack page, in order:

  1. Under "Choose the origin of the Terraform configuration", select My configuration.
  2. Under "Terraform configuration source", select .Zip file (default is Folder).
  3. Click Browse and pick oac-feedback-phase1.zip.
  4. Leave Custom providers unchecked.
  5. Name: oac-feedback-phase1
  6. Description: OCI infrastructure for the OAC AI Assistant & AI Agent tuning pipeline
  7. Pick a compartment, leave Terraform version at latest, click Next.
💡
Because schema.yaml ships inside the ZIP, OCI renders a clean form in the next step instead of a raw list of variables.
📝 Four sections

Resource Manager reads schema.yaml and shows the form in four parts: Location, Target Resources, Naming & Retention (optional), Hidden (do not edit).

Fill these required fields:

Tenancy — usually pre-filled with your tenancy OCID
Region — dropdown, matches where your OAC and ADB live (e.g. us-ashburn-1)
Compartment — the compartment that will hold the bucket, log group, Service Connector, and policies
OAC Instance OCID — paste; must start with ocid1.analyticsinstance.
Autonomous Database OCID — paste; must start with ocid1.autonomousdatabase.
💡
Naming & Retention fields (Resource Name Prefix, Bucket Name, Object Key Prefix, Log Retention) all have sensible defaults. Leave them unless your org mandates specific names.
⚠️
The Hidden section (user_ocid, fingerprint, private_key_path) stays empty — OCI injects your login session instead of API-key auth.
🧪 Why plan first?

Plan = terraform plan. Creates nothing. Shows exactly what Apply would do so you can catch typos and permission gaps before provisioning anything.

On the Review page, make sure Run apply is NOT ticked. Click Create — Resource Manager creates the stack record. From the stack detail page, click Actions → Plan, name it initial-plan, click Plan. Status moves Accepted → In Progress → Succeeded in 30–60 seconds.

OUTPUT
Plan: 6 to add, 0 to change, 0 to destroy.
6 to add means Terraform is happy with your variables. Move on to Apply.
⚠️
If the job fails, open Logs and search for Error. Most common causes: typo in an OCID, or your user lacks a permission the plan needs. Click Edit (top right) to fix the variable, then re-run Plan.
⚡ Creates real resources

Apply = terraform apply. Provisions the complete capture layer. Takes about 2 minutes. (Resource-by-resource breakdown in A-8.)

Click Actions → Apply. In the dialog: Apply job name initial-apply, Apply job plan resolution Automatically approve. Click Apply. Watch status move Accepted → In Progress → Succeeded (~2 min).

OUTPUT
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Capture layer is live. OAC feedback events from your AI Assistant and AI Agents are now being batched to Object Storage on the connector's scheduled cadence.
🔑 One string Phase B needs

Everything from Phase A is available in the OCI console, but this URI is the only value the database phase requires. Keep it handy.

On the successful Apply job, switch to the Outputs tab. Find adb_location_uri. It looks like:

OUTPUT
https://objectstorage.us-ashburn-1.oraclecloud.com/n/<namespace>/b/oac-feedback-logs/o/oac-ai/

Click the copy icon next to the value. Paste it somewhere you can retrieve it later — a text file, a password manager note, anywhere safe.

💡
You'll paste this URI into Step B-4 of Phase B (v_uri_prefix).
🔍 Sanity check — 90 seconds

Apply-succeeded means all 6 resources are created, but spot-checking them in the console builds familiarity with where each thing lives.

Confirm each resource exists:

Resource | Where to find it in the console
Bucket | Storage → Buckets → oac-feedback-logs
Log group | Observability & Management → Logging → Log Groups → oacfb-oac-loggroup
OAC service log | Inside the log group → oacfb-oac-diag-log
Service Connector | Observability & Management → Connector Hub → oacfb-oac-log-to-bucket (state: Active)
Dynamic group | Identity & Security → Domains → Default → Dynamic groups → oacfb-adb-rp-dg
IAM policies | oacfb-adb-read-bucket and oacfb-sch-write-bucket
Capture layer verified. Phase A complete — continue to Phase B to install the database pipeline.
Phase A complete · Capture layer is live

Your OAC AI feedback — every utterance, response and 👍 / 👎 — is now flowing from OCI Logging into an Object Storage bucket, timestamped and structured. Next: land it in Autonomous Database where the weekly tuning loop lives.

Next up
Phase B · Database
🗄️
Phase B

Install the Database Pipeline

One-shot Cloud Shell install — .sh + 2 .sql files

🎯 Packaged scripts

Phase B ships as three files you run in Cloud Shell: one install script and two SQL files. Download them once, run them once.

Download the three files from the release page. Save to a folder on your laptop or straight into Cloud Shell:

  • install_oac_feedback_pipeline.sh — one-shot installer (runs in Cloud Shell)
  • oac_feedback_pipeline_admin_setup.sql — creates the OACFB DB user
  • oac_feedback_pipeline_install.sql — creates the ingestion procedure, views, and scheduler
💡
The .sh script calls both .sql files in the right order and handles all prompts. If you prefer running SQL manually, see B-5 below.
☁️ Why Cloud Shell?

Cloud Shell is a browser-based Linux shell inside OCI, pre-authenticated with your OCI session. No keys, no SSH, nothing to install.

In the OCI console, click the Cloud Shell icon (top-right, looks like >_). Wait a few seconds for it to initialize. Then upload the three files:

  1. In Cloud Shell, click the gear icon (top-right of the shell pane) → Upload.
  2. Select all three files you downloaded in Step B-1. Wait for upload confirmation.
  3. Verify with ls -l install_oac_feedback_pipeline.sh oac_feedback_pipeline_*.sql
☰ OCI Console → Cloud Shell icon → Upload
⚡ One command, 4 prompts

The installer creates the OACFB user, drops in the SQL pipeline, and pastes your adb_location_uri into the ingestion procedure. You answer 4 prompts.

Make the script executable and run it:

BASH
chmod +x install_oac_feedback_pipeline.sh
./install_oac_feedback_pipeline.sh

It will prompt you for:

ADB OCID — your Autonomous Database OCID
ADMIN password — the ADB ADMIN password (hidden input)
OACFB password — a new password for the OACFB schema user (hidden input)
adb_location_uri — the URI you copied in Step A-7
⚠️
Password prompts are hidden (nothing prints as you type) — that's normal. If you mistype, press Ctrl+C and re-run.
When the script exits with 'Installation complete', the OACFB user is created, the pipeline is installed, and the hourly scheduler is armed. First data appears in ~5 minutes.
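You can confirm the scheduler is armed without waiting for the first run. Signed in as OACFB in SQL Worksheet, the standard Oracle data dictionary view USER_SCHEDULER_JOBS lists the job (the exact job name is whatever the installer created):

```sql
-- Lists scheduler jobs owned by OACFB; expect the pipeline's hourly job
-- with ENABLED = 'TRUE' and a NEXT_RUN_DATE within the next hour.
SELECT job_name, enabled, state, next_run_date
FROM   user_scheduler_jobs;
```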
🔍 Is data flowing?

Two quick SQL checks confirm the end-to-end path is working. Run these in Database Actions as the OACFB user.

Sign into Database Actions as OACFB with the password you set in B-3. Open SQL Worksheet and run:

SQL
-- Check 1: recent ingested rows exist
SELECT COUNT(*) AS rows_ingested
FROM OAC_FEEDBACK_WITH_LSQL
WHERE ts > SYSTIMESTAMP - INTERVAL '1' HOUR;

-- Check 2: no unknown buckets, LSQL correlated
SELECT event_bucket, COUNT(*) AS event_count
FROM OAC_FEEDBACK_WITH_LSQL
GROUP BY event_bucket;
If check 1 returns a non-zero count and check 2 shows no unknown bucket, ingestion is healthy. Move on to Phase C.
⏱️
Cadence is configurable. The ingestion interval is controlled by the Service Connector you provisioned in Phase A. It can be dialed down to once per minute for live troubleshooting, but leave the default cadence for production — minute-level polling is noisy and unnecessary. To change it: OCI Console → Observability & Management → Connector Hub → oacfb-oac-log-to-bucket → Edit → adjust the batch interval.
⚠️
If check 1 returns 0: wait another 5 minutes (the first run may not have landed yet on the default cadence). Still nothing after 15 minutes? Check that the Service Connector from Phase A shows state 'Active' and the bucket has .gz log files.
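You can also list the bucket contents directly from ADB with DBMS_CLOUD.LIST_OBJECTS, which confirms the capture side independently of the scheduler. The credential name below is a placeholder; use whatever credential the install SQL actually created, and substitute your own region, namespace, and the adb_location_uri from Step A-7:

```sql
-- Placeholder credential name and URI -- substitute the credential created by
-- the installer and your adb_location_uri. Expect .gz object names if capture works.
SELECT object_name, bytes, last_modified
FROM   DBMS_CLOUD.LIST_OBJECTS(
         'OACFB_CRED',
         'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/oac-feedback-logs/o/oac-ai/');
```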
🛠️ Prefer running SQL by hand?

Cloud Shell is the fastest path but not required. If your policies block it, or you want to step through the pipeline SQL yourself, use the manual flow below.

The manual path runs the two .sql files directly in Database Actions. It's the same end-state as B-3 — same user, same procedure, same scheduler.

  1. Sign into Database Actions as ADMIN (not OACFB — OACFB doesn't exist yet).
  2. Open SQL Worksheet, paste the contents of oac_feedback_pipeline_admin_setup.sql, edit the v_oacfb_password line to set your chosen password, run as script. This creates the OACFB user with the right grants.
  3. Sign out of ADMIN. Sign back in as OACFB with the password you just set.
  4. Open oac_feedback_pipeline_install.sql. Find the line v_uri_prefix := '...'; near the top. Paste the adb_location_uri from Step A-7 between the single quotes.
  5. Run the whole script as script (not as statements). It creates: the OCI credential, the SODA collection, the watermark table, the ingestion procedure, the feedback/LSQL views, and the hourly scheduler job.
  6. Run the two verify queries from Step B-4 above to confirm the pipeline is live.
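For orientation, the admin setup script boils down to creating the schema user and granting what the pipeline needs. The outline below is illustrative only, not the shipped script; the actual grants, quotas, and password rules are defined in oac_feedback_pipeline_admin_setup.sql itself:

```sql
-- Illustrative outline only; the shipped SQL file is authoritative.
CREATE USER OACFB IDENTIFIED BY "YourChosenPassword1#";
ALTER USER OACFB QUOTA UNLIMITED ON DATA;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW,
      CREATE PROCEDURE, CREATE JOB TO OACFB;
GRANT EXECUTE ON DBMS_CLOUD TO OACFB;   -- object-store access for ingestion
GRANT SODA_APP TO OACFB;                -- required for the SODA collection
```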
💡
If you ever want to re-run the installer to pick up a new adb_location_uri or reset the scheduler, you can — the installer is idempotent on the credential, watermark, and job.
Phase B complete · Pipeline is ingesting

Your feedback is now landing in Autonomous Database on a scheduled cadence — queryable via OAC_FEEDBACK_WITH_LSQL, with every 👎 correlated to the exact LSQL the AI generated. Next: turn that signal into a tuning workbook Authors open every week.

Next up
Phase C · Tune
📊
Phase C

Build the OAC Insights Workbook

Connect OAC to ADB, create the dataset, build the tuning workbook. (3 steps)

🔐 Connect OAC to your database

OAC needs to connect back to your database to access the views and tables created by the installer in Phase B. Use OAC's Database Connection wizard with the Autonomous Database service name, OACFB user, and the password you set during install.

In OAC, go to Console → Connections → Create. Pick Oracle Autonomous Data Warehouse or Autonomous Transaction Processing (match your ADB type). Upload the Autonomous wallet, then enter: OACFB as the username, the password you set in Phase B, and the service name from the wallet (usually <dbname>_high or <dbname>_medium). Test the connection and save.

💡
The OACFB user already has SELECT on the two views created by the installer — OAC_FEEDBACK_CLEAN and OAC_FEEDBACK_WITH_LSQL. No additional grants needed.
📊 Build from views

A dataset in OAC is built from a database view. Point OAC at OAC_FEEDBACK_WITH_LSQL — every column is analysis-ready.

In OAC, go to Data → Create and choose Dataset. Select the connection from C-1, then pick the view OAC_FEEDBACK_WITH_LSQL. Recommended starter visualizations:

  • Feedback Sentiment Trend: Timeline of positive / negative / neutral feedback over time
  • Top Negative LSQL Patterns: Bar chart of most-criticized LSQL queries
  • Feedback by Data Model: Which data models draw the most feedback
  • Response Time vs. Satisfaction: Scatter of elapsed_time vs. feedback sentiment
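Each starter visual maps to a simple aggregate over the view, and prototyping the 👎 rate in SQL Worksheet first is a quick sanity check before building visuals. A sketch, assuming a ts timestamp and a feedback_value column with POSITIVE/NEGATIVE values (match both names to your actual view):

```sql
-- Daily thumbs-down rate -- column names are assumptions; verify against the view.
SELECT TRUNC(ts) AS day,
       ROUND(COUNT(CASE WHEN feedback_value = 'NEGATIVE' THEN 1 END)
             / NULLIF(COUNT(*), 0), 3) AS thumbs_down_rate
FROM   OAC_FEEDBACK_WITH_LSQL
GROUP  BY TRUNC(ts)
ORDER  BY day;
```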
🔬
Advanced tuning technique · from the Oracle blog
Trace a thumbs-down all the way to the generated LSQL

Start from a negative feedback row and note the event_time and parent_ecid. Look for that parent_ecid value in the ecid column, sort by event_time ascending — the next row contains the LSQL the AI Assistant generated for that utterance. Validate by matching the data-model name. That's the single most useful pattern for converting "user hated this" into "here's the synonym / instruction to fix."
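The same trace can be expressed as a self-join on the view, joining a feedback row's parent_ecid to the generation row's ecid. This is a sketch only; the column names (ts, utterance, feedback_value, lsql) are illustrative and should be mapped to the actual columns of OAC_FEEDBACK_WITH_LSQL:

```sql
-- Illustrative self-join; map the column names to your view before running.
SELECT fb.ts        AS feedback_time,
       fb.utterance,
       gen.lsql     AS generated_lsql
FROM   OAC_FEEDBACK_WITH_LSQL fb
JOIN   OAC_FEEDBACK_WITH_LSQL gen
       ON gen.ecid = fb.parent_ecid     -- feedback row's parent_ecid → generation row's ecid
WHERE  fb.feedback_value = 'NEGATIVE'   -- thumbs-down rows only
ORDER  BY fb.ts DESC;
```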

🎯 From signal to fix

A workbook turns the dataset into a live tuning loop. Publish it, share it with the AI owners, and come back to it every week to see the impact of your tuning.

From the dataset, click Create Workbook. Lay out the four starter visualizations from C-2 across two canvases: the first as a daily health check, the second to drill from a thumbs-down to the generated LSQL. Most common symptoms and where to fix them:

Symptom in the workbook | Likely root cause | Where to fix in OAC
LSQL uses wrong column for a common word | Missing or ambiguous synonym | Data model → column → Synonyms (AI Assistant)
Agent answers off-topic or breaks tone | Supplemental instructions too thin | AI Agent → Instructions (add examples & guardrails)
"No data" returned for valid question | Data model scope missing the subject area | Data model → Tables / Joins
Same utterance → different LSQL over time | Sample questions drifting the planner | AI Assistant → Sample Questions
Make one tuning change at a time, then watch the same visualization for 24–48 hours. The feedback pipeline runs on its default cadence — fast enough to confirm the fix landed without being noisy.
🚀
From pilot to production

This is how AI moves from pilot to production — with confidence.

Capture is live. The pipeline runs on its scheduled cadence. Every 👎 is an evidence-backed fix. Every week, accuracy compounds. Every tuning decision has a named owner and a traceable trail — the difference between a feature your users tried once and a conversational analytics platform they rely on every day.

That's the goal: deliver the feedback tuning dashboard to Authors and domain experts so they can identify an incorrect answer, confirm which column the prompt mapped to, correct the synonym or supplemental instruction, and re-publish — the same day. No tickets. No wait states. No guesswork.
Closed loop: End user · 👎 → Dashboard & LSQL trace → Author fixes the right layer → Re-published to production
Further reading