Taste in Software Development & AI Tools

In the last few weeks, the concept of ‘taste’ has emerged in a lot of the discussions I’ve had around AI tools. It’s an interesting word that implies a sense of knowing what looks good & how to style things. It makes me think back to curating Pinterest boards and obsessing over how the pins look together.

Pinterest is a digital mood board – but I think for many of my friends, it was a transitional tool before the Instagram grid came around. Selecting images of beautiful interiors to pin to your “future dream house” board, and Tom Hiddleston images to your “cute” board. Then reorganising: moving pins between boards either because they were in the wrong place, or fit better elsewhere.

Being trusted by a friend to share a board, and then finding that they had surreptitiously un-pinned an image, was a subtle indication that your taste hadn’t matched their vision of what the board symbolised.

Watching friends begin to obsess over their Instagram grid was a strange phenomenon. It started with ensuring each row of pictures fit well together, and then came white outlines added to images, or posts published in quick succession to make sure a colour scheme change was complete in time.

I gave up on curating a perfect image of myself on social media a long time ago. Yet as I step further into the tech world – building products that people want to use – I periodically return to this concept of designing things that look good.

Good design can make up for a lot of flaws in the product. It allows you to choose what you want the user to focus on, and to clarify and accentuate the actions you want the user to take.


Training My Design Eye

I’ve made some efforts to ‘train my design eye’ over the past few years. With every small tip I’ve learnt, I’ve come to respect designers more.

An incredible friend I met on a train is a graphic designer, and over the years I’ve observed how careful and intentional she is in curating the world around her. In both a digital and a physical sense, she invests time in surrounding herself with examples of great design.

At this moment in time, August 2025, the wider conversation seems to have shifted to “all AI-generated stuff looks the same” – the blue/green gradients, the same fonts, and so on.

With AI design, the old IT saying of “Garbage In, Garbage Out” rings true.

On Reddit, I found this quote:

“Taste—rooted in intuition, culture, and emotional resonance—is the differentiator that technology can’t replicate.”

Whilst I believe this is true, I do believe technology can get pretty close.


Taste in AI Outputs

LLMs (Large Language Models) are trained on a vast dataset, and in response to your message they return the most probable reply based on that training data. This is a simplification in some respects. Yet, as I’ve started working more with LLMs via APIs, it has come to feel increasingly true, because I’ve realised that:

Your system prompt is the deciding factor between a valid response and a great response.

Tweaking a word or two, or adding or removing a sentence, makes a larger difference than expected – sometimes in unexpected ways, where a single change transforms your previously predictable reply into something with entirely new issues.

The LLM acts as a mirror to human thought. If you can distill the process of creating a design eye and provide the LLM with good input, you can create something that looks good.
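
To make this concrete, here’s a minimal sketch of the kind of experiment I mean – assuming the OpenAI Python SDK, with the model name and prompt wording being purely illustrative:

```python
# A minimal sketch: the "design eye" distilled into a system prompt.
# Assumes the OpenAI Python SDK; the model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DESIGN_SYSTEM_PROMPT = """You are a front-end developer with strong design taste.
Prefer generous whitespace, a restrained palette with a single accent colour,
a consistent spacing scale, and a clear visual hierarchy.
Avoid default blue/green gradients and decorative clutter."""

def generate_component(request: str) -> str:
    """Ask the model for a component, with the design guidance as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": DESIGN_SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

print(generate_component("A pricing card with three tiers, using Tailwind CSS."))
```

Changing a single line of that system prompt – swapping ‘restrained palette’ for ‘bold palette’, say – noticeably shifts everything it produces afterwards.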

As a small disclaimer – I’m uncertain if I’ll ever get an LLM to generate ‘great designs’, as the element of originality, and of combining concepts, requires experimentation built from human experience.


Experiments to Improve AI’s “Design Eye”

In the last few weeks, I’ve been experimenting with different ways of improving the design eye of AI tools. Here’s what I’ve been up to:

  • Tailwind CSS MCP Server
    I’ve created an MCP server for Tailwind CSS. I’ve connected it to Cursor (but it can be connected to any LLM coding agent), and it helps the LLM generate Tailwind classes that look more interesting. There’s a rough sketch of what such a tool looks like after this list.
    The next thing I want to try here is augmenting the MCP server with a design system like Material Design. This will give it a set of guidelines and a framework for creating components that consistently look good.

  • Feeding Examples of Good Design
    I’d been thinking about the set of examples you can give LLMs to generate better designs, so I gave my LLM agent images of good designs to replicate (there’s a sketch of this after the list too). This has led to very mixed results, and needs further thought.

  • Collating and Distilling Good Design
    I’ve started collating images of good design – and distilling what makes them good. I then feed this context into the system prompt so that the designs it generates look closer to & fit with this style.
    The main blocker here is that it depends on my own design knowledge, and my ability to describe design features.

  • Figma as a Design Aid
    I’ve seen more tools using Figma to generate frontends, but again this is limited by my decent but not expert level of Figma knowledge. This returns me to the catch-22 of needing to improve my own design eye to improve my LLM coding agent’s design eye 😄.
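
Here’s a rough sketch of what the Tailwind MCP tool idea looks like – assuming the official MCP Python SDK (the `mcp` package and its FastMCP helper); the class recipes themselves are just illustrative:

```python
# A rough sketch of a Tailwind-flavoured MCP tool.
# Assumes the official MCP Python SDK (FastMCP); the recipes are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("tailwind-design-helper")

# A tiny, hand-distilled "design eye": class recipes the agent can reach for
# instead of defaulting to the same gradient-heavy look.
CLASS_RECIPES = {
    "card": "rounded-2xl border border-neutral-200 bg-white p-6 shadow-sm",
    "primary-button": "rounded-lg bg-indigo-600 px-4 py-2 text-sm font-medium text-white hover:bg-indigo-500",
    "muted-text": "text-sm text-neutral-500",
}

@mcp.tool()
def suggest_tailwind_classes(component: str) -> str:
    """Return a curated Tailwind class string for a named component type."""
    recipe = CLASS_RECIPES.get(component)
    if recipe is None:
        return f"No recipe for '{component}'. Known components: {', '.join(CLASS_RECIPES)}"
    return recipe

if __name__ == "__main__":
    # Runs over stdio, which is how Cursor (or any MCP-aware agent) connects to it.
    mcp.run()
```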
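
And a minimal sketch of the ‘feeding examples’ experiment – again assuming the OpenAI Python SDK, with the model name and reference image URL as placeholders:

```python
# A minimal sketch of handing the model a reference image to replicate.
# Assumes the OpenAI Python SDK; model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

def replicate_design(image_url: str, brief: str) -> str:
    """Ask a vision-capable model to reproduce the look of a reference screenshot."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any vision-capable model works here
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": f"Replicate the visual style of this design. {brief}"},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

print(replicate_design(
    "https://example.com/reference-landing-page.png",  # placeholder reference image
    "Build the hero section as HTML with Tailwind classes.",
))
```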


The Goal

I eventually want my own agentic setup that I can outsource design responsibility to whilst coding – a system that can be repurposed across projects and removes me as the primary design eye. Instead, I can have a self-enforcing setup of LLMs working against a design system.

My only blocker is needing to upgrade my existing taste.