
Greenline Landscaping

A 2-day proof of concept for an AI-assisted intake workflow. Natural language quote requests in, structured job records out.

The problem this explores is one that comes up constantly in small service businesses: a customer calls or messages with a vague description of what they need, and someone on staff has to translate that into a structured job record covering service type, property size, access constraints, timeline, and materials. It is tedious, error-prone, and happens dozens of times a week.

The question I wanted to answer in two days: can an LLM embedded in a structured workflow handle that translation reliably, with a backend owning all the business logic?

What it does

A customer submits a quote request in plain language. The workflow extracts structured job details using an LLM with constrained structured outputs: no freeform responses or open-ended generation. If required fields are missing, the system asks one targeted follow-up question at a time and recomputes readiness after each response. When the record is complete, it surfaces in an admin review interface.

The key constraint is that the LLM only handles extraction, parsing what was said into structured fields. All decisions about readiness, status transitions, and what gets written to the database live in backend logic.
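As a sketch of that split (the field names and which fields count as required are my assumptions, not the project's actual schema, and the real system uses Pydantic models rather than dataclasses), the record and the backend-owned readiness check might look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JobRecord:
    # Fields the LLM is allowed to fill in from the customer's message.
    # Everything else -- status, timestamps, review state -- is backend-owned.
    service_type: Optional[str] = None
    property_size: Optional[str] = None
    access_constraints: Optional[str] = None
    timeline: Optional[str] = None
    materials: Optional[str] = None

# Which fields block a quote is a business rule, so it lives here,
# not in the prompt (hypothetical choice of required fields).
REQUIRED = ("service_type", "property_size", "timeline")

def missing_fields(record: JobRecord) -> list:
    """Backend readiness logic: recomputed after every customer reply."""
    return [name for name in REQUIRED if getattr(record, name) is None]

def is_ready(record: JobRecord) -> bool:
    return not missing_fields(record)
```

The point of the separation is that swapping the LLM or the prompt never changes what "ready" means.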

The Temporal decision

Using Temporal for a two-day POC might look like over-engineering. The alternative was cron jobs or a simple polling loop, which I’ve used on previous projects and ran into the same wall both times: failure recovery is finicky, visibility into in-flight state is nonexistent, and retrying a step that partially completed is painful.

Temporal’s workflow model handles all of that, and the visibility into what state each intake request is in was a practical bonus. Having the workflow as the source of truth made the admin interface much easier to build.

What it demonstrated

The intake extraction works well for the kinds of requests a landscaping business actually gets. Running structured outputs through a strict Pydantic validation layer before any business logic touches the data meant LLM failures were caught early and handled predictably.
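A minimal sketch of that validation boundary, using a hand-rolled check where the project uses Pydantic models (the allowed service list and error messages are my assumptions):

```python
import json

# Hypothetical closed vocabulary; a Pydantic model would enforce this
# with an Enum or Literal type.
ALLOWED_SERVICES = {"lawn mowing", "hedge trimming", "planting", "hardscaping"}

def parse_llm_output(raw: str) -> dict:
    """Reject malformed or out-of-vocabulary model output before any
    business logic runs, so failures surface at the boundary."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed JSON from model: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    service = data.get("service_type")
    if service is not None and service not in ALLOWED_SERVICES:
        raise ValueError(f"unknown service_type: {service!r}")
    return data
```

Anything that fails here is retried or flagged for a human, so downstream code only ever sees well-formed records.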

The one-question-at-a-time constraint on follow-ups was the right call. An unconstrained system that asks for everything missing at once reads like a form; the sequential approach feels closer to a conversation and got better completion rates in informal testing.
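One way to implement that constraint (the question wording and the priority order are my assumptions) is a fixed priority list that always yields exactly one next question:

```python
from typing import Optional

# Priority-ordered follow-ups: ask the most quote-critical field first.
FOLLOWUPS = [
    ("service_type", "What kind of work are you looking to have done?"),
    ("property_size", "Roughly how large is the area we'd be working on?"),
    ("timeline", "When would you like the work done?"),
]

def next_question(record: dict) -> Optional[str]:
    """Return a single question, or None when the record is complete."""
    for field, question in FOLLOWUPS:
        if not record.get(field):
            return question
    return None
```

Keeping the order in data rather than in the prompt means the conversation flow can be tuned without retraining or re-prompting anything.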

I treated this as a scoping exercise from the start, validating whether the approach was sound before committing to a full build.

Thank you for reading.
