OAC AI Feedback Tuning Pipeline
Set up in under 15 minutes. Graduate Oracle Analytics Cloud (OAC) AI Assistants & Agents from pilot to production with confidence.
Turn every 👎 into a tuning signal.
Without a feedback loop, AI pilots stall. Authors can't see which answers failed or why. Tuning becomes guesswork. Users stop trusting the Assistants & Agents. Analysts remain the bottleneck.
A packaged 3-phase pipeline captures every 👍 / 👎 on your OAC AI Assistants & Agents, together with the prompt and the generated Logical SQL (LSQL), and surfaces it in an OAC workbook. Authors review, tune the right layer (synonyms, instructions, knowledge documents), and re-publish the same day.
A feedback tuning pipeline for OAC AI
- Captures every 👍 / 👎 with the prompt + the generated LSQL.
- Lands it in Autonomous Database on the connector's scheduled cadence.
- Surfaces it in an OAC workbook the Author opens weekly.
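Once landed, each 👎 carries the evidence needed to fix it. A minimal query sketch against the OAC_FEEDBACK_WITH_LSQL view; the column names (feedback_time, user_prompt, generated_lsql, rating) are illustrative assumptions, and the installer defines the actual schema:

```sql
-- Minimal sketch: every 👎 with its prompt and the LSQL the AI generated.
-- Column names are assumptions for illustration, not the shipped schema.
SELECT feedback_time,
       user_prompt,
       generated_lsql
  FROM oac_feedback_with_lsql
 WHERE rating = 'DOWN'
 ORDER BY feedback_time DESC;
```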
OAC AI knows your schema — not your business
- Without a feedback loop, authors can’t see which answers failed or why.
- Tuning becomes guesswork; users stop trusting the Assistants & Agents.
- The pilot stalls. Analysts remain the bottleneck.
Production-grade AI, tuned weekly
- Self-service delivered. 10-second answers, not 3-day BI requests.
- Evidence per release. Every fix traced to a real 👎 and the exact LSQL.
- Same-day remediation. Author refines, re-publishes, closes the loop.
Browser-driven, end-to-end
- A · Capture. Upload a Terraform ZIP to Resource Manager. ~5 min
- B · Land. Run one script in Cloud Shell. ~5 min
- C · Tune. Build the OAC workbook and start the weekly loop. ~5 min
Set up the feedback tuning pipeline in under 15 minutes.
Three phases, fully browser-driven. Each phase below shows what to do and the exact artifact to download — right where you need it.
Upload the Terraform bundle to Resource Manager. OCI provisions the bucket, log group, Service Connector, Dynamic Group, and policies.
- Upload the packaged ZIP as a new stack.
- Select the target compartment and paste the OAC + ADB OCIDs.
- Click Plan → Apply. OCI does the rest.
Run a single shell script in Cloud Shell. It creates the OACFB user, installs the ingestion procedure, arms the scheduler, and builds the OAC_FEEDBACK_WITH_LSQL view.
- Upload the installer and the SQL bundle to Cloud Shell.
- Run the installer — answer 4 prompts (compartment, ADB, wallet, admin password).
- Installer is idempotent — safe to re-run.
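Once the installer completes, a quick sanity check from SQL Worksheet confirms the pipeline is armed. A hedged sketch, run as the OACFB user; it assumes the installer registers the ingestion refresh as a DBMS_SCHEDULER job, which the shipped script may organize differently:

```sql
-- Verify the scheduled ingestion job is registered and enabled
-- (assumes the refresh is a DBMS_SCHEDULER job in this schema).
SELECT job_name, enabled, next_run_date
  FROM user_scheduler_jobs;

-- Verify feedback rows are landing in the reporting view.
SELECT COUNT(*) AS feedback_rows
  FROM oac_feedback_with_lsql;
```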
Point OAC at the view, build a workbook, and start the weekly tuning loop — synonyms, sample questions, supplemental instructions, knowledge documents.
- Create a Dataset on OAC_FEEDBACK_WITH_LSQL.
- Build visuals: 👎 rate, top miss patterns, LSQL drill-through (sketched below).
- Review weekly — tune the right layer, re-publish, close the loop.
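The workbook's headline visual reduces to a simple aggregation over the view. A sketch of the weekly 👎 rate, reusing the illustrative column names assumed in the query above:

```sql
-- Hedged sketch: 👎 rate per ISO week, the trend the weekly review opens with.
-- TRUNC(date, 'IW') buckets rows by ISO week.
SELECT TRUNC(feedback_time, 'IW') AS week_start,
       COUNT(*)                   AS total_ratings,
       ROUND(100 * AVG(CASE WHEN rating = 'DOWN' THEN 1 ELSE 0 END), 1)
                                  AS thumbs_down_pct
  FROM oac_feedback_with_lsql
 GROUP BY TRUNC(feedback_time, 'IW')
 ORDER BY week_start;
```

The same measure can equally be defined as a calculation inside the OAC Dataset; spelling it out in SQL here just makes the logic explicit.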
Set Up the Capture Layer with Resource Manager
Provision OCI capture layer via Resource Manager — packaged ZIP, click Apply (8 steps)
Your OAC AI feedback (every utterance, response, and 👍 / 👎) is now flowing from OCI Logging into an Object Storage bucket, timestamped and structured. Next: land it in Autonomous Database, where the weekly tuning loop lives.
Install the Database Pipeline
One-shot Cloud Shell install — .sh + 2 .sql files
Your feedback is now landing in Autonomous Database on a scheduled cadence — queryable via OAC_FEEDBACK_WITH_LSQL, with every 👎 correlated to the exact LSQL the AI generated. Next: turn that signal into a tuning workbook Authors open every week.
Build the OAC Insights Workbook
Connect OAC to ADB, create the dataset, build the tuning workbook. (3 steps)
This is how AI moves from pilot to production — with confidence.
Capture is live. The pipeline runs on its scheduled cadence. Every 👎 becomes an evidence-backed fix. Every week, accuracy compounds. Every tuning decision has a named owner and a traceable trail: the difference between a feature your users tried once and a conversational analytics platform they rely on every day.