
Real Time Agent

AI-Supported Contract Tagging

The contract tagging workflow relied on manual effort to process thousands of contracts per month. I led the UX of an AI-powered tagging tool that introduced a human-in-the-loop model, shifting users from performing tasks to validating AI-generated outputs.

Role

Product Designer

My Contribution

E2E Product Delivery

Cross-functional Team

Product, Tech, CX and Legal

Year

2025

The contract tagging process was heavily manual and relied on our internal tagging team to individually review and tag thousands of contracts each month. As volumes grew, this manual approach created a widening operational bottleneck: slowing turnaround times, adding pressure on the team, and limiting our ability to scale. The downstream impact was clear in NPS feedback: agents were frustrated by contract turnaround times, which slowed how quickly they could close deals.

The strategic opportunity was to shift tagging from a fully manual process to an AI-assisted workflow where the team reviews and validates tags rather than applying them from scratch. This would reduce delays, improve agent experience, and unlock scalability for NSW and QLD expansion.

I led the end-to-end design of an AI-powered tagging helper, collaborating with Product, Tech, CX, and Legal teams. My responsibilities included:

  • Defining the AI design principles to build trust and accountability
  • Designing the AI-assisted workflow from tag suggestion to review and feedback
  • Creating interfaces that balanced automation with human oversight
  • Aligning cross-functional teams on the MVP vision and rollout plan

AI as a collaborator, not a replacement

The solution was an AI-powered tagging helper. The model is trained on previously tagged contracts and automatically applies tags, so the team can focus on reviewing and confirming them rather than applying each tag manually.
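To make the review-first model concrete, here is a minimal sketch of how AI-suggested and manually applied tags could be kept distinct and routed through human review. The names, fields, and values are hypothetical illustrations of the concept, not the production schema.

```python
# Illustrative data model for human-in-the-loop tag review.
# All names and fields are assumptions, not the shipped implementation.
from dataclasses import dataclass
from enum import Enum


class TagSource(Enum):
    AI = "ai"          # applied automatically by the model
    MANUAL = "manual"  # applied by a team member


class ReviewStatus(Enum):
    PENDING = "pending"      # awaiting human review
    CONFIRMED = "confirmed"  # reviewer accepted the tag as-is
    EDITED = "edited"        # reviewer adjusted the tag before accepting
    REJECTED = "rejected"    # reviewer removed the tag


@dataclass
class ContractTag:
    label: str              # e.g. "break clause", "special condition"
    page: int               # page the tag anchors to
    source: TagSource       # kept explicit so AI tags stay distinguishable
    confidence: float       # model confidence 0.0-1.0 (manual tags use 1.0)
    status: ReviewStatus = ReviewStatus.PENDING
```

Keeping the source and review status explicit is what lets the interface show AI suggestions as editable, distinguishable, and reversible, in line with the principles below.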

Building AI capabilities people can trust

Designing an AI feature requires more than functional accuracy; it demands that users feel confident, in control, and able to course-correct. We grounded the design in five principles:

Trust

AI suggestions are editable and distinguishable from manual tags, with the ability to adjust, delete, or turn off AI features.

User Control & Explainability

The system learns from user corrections and feedback, allowing users to teach preferred rules; feedback loops help shape AI behaviour.

Co-Creation

A user-friendly interface enables users to actively contribute to the tagging process and collaborate with the AI.

Transparency

Confidence scores and reasoning are displayed to help users understand AI suggestions and reduce errors.

Accountability

Human-in-the-loop ensures oversight, giving users clear opportunities to review and intervene when necessary.

For long contracts (100+ pages), AI tags would sometimes appear incorrectly mid-document, making them easy to miss. Once users noticed this, they started to review every single page to ensure there were no random tags.

The AI was no longer seen as a time saver; it was seen as a liability. This reduced trust and, in some cases, led users to turn off the AI helper and revert to manual tagging.

“When the pages are more than 100, I usually turn off the AI tagging tool to avoid consuming time scanning all the pages for unnecessary tags.”

Insight

The time spent searching for AI mistakes outweighed the time saved by the tool. Without confidence in output accuracy, users defaulted to checking everything manually, eroding user trust and efficiency.

Key Decision

  • Introduced jump-to-next-tag navigation and keyboard shortcuts to reduce unnecessary scanning and let users quickly correct errors.
  • Implemented a confidence threshold (~80%) to suppress low-confidence suggestions and improve the accuracy of the tags surfaced (see the sketch below).
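A minimal sketch of how these two decisions could work together, assuming a simple tag record with source, page, confidence, and review status. The threshold value, field names, and helper functions are illustrative assumptions, not the shipped code.

```python
# Illustrative sketch only: threshold suppression + jump-to-next-tag.
CONFIDENCE_THRESHOLD = 0.8  # suppress AI suggestions below ~80% confidence


def visible_tags(tags: list[dict]) -> list[dict]:
    """Surface only AI suggestions that clear the confidence threshold;
    manually applied tags are always shown."""
    return [
        t for t in tags
        if t["source"] == "manual" or t["confidence"] >= CONFIDENCE_THRESHOLD
    ]


def next_pending_page(tags: list[dict], current_page: int) -> int | None:
    """Jump-to-next-tag: return the next page with an unreviewed AI tag,
    so reviewers don't scan every page of a 100+ page contract."""
    pending_pages = sorted(
        t["page"] for t in visible_tags(tags)
        if t["source"] == "ai" and t["status"] == "pending"
    )
    return next((p for p in pending_pages if p > current_page), None)
```

The intent of both decisions is the same: spend reviewer attention only where the AI is likely to be right or clearly needs a human call, rather than forcing a page-by-page hunt for stray tags.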

Contract turnaround time reduced from ~8 hours → ~50 mins

Tagging team stress reduced from 27.3% to 11.1% (pre- vs post-AI survey)

  • Shipped MVP Document Tagging App with AI-suggested tags applied by default, reducing manual effort.
  • Built continuous feedback loops to identify team pain points and improve AI usability and accuracy over time.
  • Enhanced team wellbeing and customer satisfaction: handling thousands of contracts per month is now more manageable for the team (a post-release survey confirmed reduced stress), while agents receive contracts faster, improving both internal operational sustainability and external customer satisfaction.

This project reinforced that impactful product design often happens before a single interface is built. Aligning multiple teams around an AI-assisted vision required storytelling, cross-functional empathy and strategic framing.

The future of operational processes at RTA isn't just about automation; it's about designing intelligent systems that empower teams, build trust, and scale sustainably.