
Thanks for mentioning this, but it's definitely not what we're trying to achieve.

We may have underplayed the AI a little, only because we have existing users already finding value with our current setup.

The goal is more like v0.dev, where, with a prompt, you can generate your entire internal tool. We think we're not too far away from this (as shown in the YouTube demo).


That's a better description and a better goal.


Thanks.

This is something we're actively working on fixing. That's where we're going with the project as a whole!


Makes sense! I would say flip your video and put the AI stuff upfront. At least then people can see what you have/where you're going.

More of a "Use AI to generate internal tools" (the language, auth, UI, etc. could be anything, even though React is a great start).

In time you could generate the stuff in any language people want: "use the language you love instead of constricting no-code tools or inflexible boilerplate starters."

The boilerplate is the less valuable part, because you're competing with already established open source communities that "will be around long after you go out of business or pivot" (quotes for the sentiment, not because I think you're going out of business :) )


Great feedback, thanks!

We mostly shied away from pitching the AI because we thought people would find immediate value in the starter and be able to use it (like our existing users/customers), but in hindsight that was a mistake, especially since the AI is already decently valuable.


I'll chime in with my personal experience: I spend about 50-60% of my time in GPT-4 just pasting code, prompting for what I want and then pasting it back into my code editor.

If this is my workflow most of the time, then surely this can be more streamlined. The way to streamline it would be:

1. Can I tell GPT-4 about my components (and have that always stay up to date)?

2. Can I ask GPT-4 to write to the file directly?

3. If the output has an error, can I ask GPT-4 to look at the compile-time error and auto-fix it?

The three things I mentioned above are things I do every single day. We're just looking to make it easier.
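
Roughly, the loop I have in mind looks like this (a sketch only; the docs file, paths, model name, and retry budget are all made up):

    // Sketch of the paste/prompt/fix loop, using the OpenAI Node SDK.
    import { execSync } from "node:child_process";
    import { readFileSync, writeFileSync } from "node:fs";
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
    const componentDocs = readFileSync("docs/components.md", "utf8"); // (1) component context

    async function generate(prompt: string, file: string) {
      let request = prompt;
      for (let attempt = 0; attempt < 3; attempt++) {
        const res = await client.chat.completions.create({
          model: "gpt-4",
          messages: [
            { role: "system", content: `You rewrite React files. Available components:\n${componentDocs}` },
            { role: "user", content: `${request}\n\nCurrent file:\n${readFileSync(file, "utf8")}` },
          ],
        });
        writeFileSync(file, res.choices[0].message.content ?? ""); // (2) write straight to the file
        try {
          execSync(`npx tsc --noEmit ${file}`, { stdio: "pipe" });
          return; // compiles cleanly, done
        } catch (err: any) {
          request = `Fix this compile error:\n${err.stdout}`; // (3) feed the error back
        }
      }
    }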


Cursor.sh, Continue, and (probably? I dunno) Copilot do those things well.


Mainly their implementation of React Server Components, which is a far simpler set of APIs to work with than traditional React. (Also very LLM-friendly.)


Why are RSCs more LLM-friendly than traditional React?


1. Very clean, easy to understand APIs.

2. Easy to make API calls.

3. Easy to reason about server/client component boundaries

Makes it easy for an LLM to write code without all the cruft/boilerplate.
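
For example, fetching and rendering data is just an async function, with no hooks or effects (the endpoint and data shape below are made up):

    // app/users/page.tsx — a server component by default in Next.js 14
    type User = { id: number; name: string }; // assumed shape

    export default async function UsersPage() {
      const res = await fetch("https://example.com/api/users"); // hypothetical endpoint
      const users: User[] = await res.json();
      return (
        <ul>
          {users.map((u) => (
            <li key={u.id}>{u.name}</li>
          ))}
        </ul>
      );
    }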


Once you install our starter via:

    npx creoctl@latest init

cd into the project, install the dependencies, and npm run dev.

On port 8891, you can open any of the tools (or create a new one), and a chat box appears on the right. There you can ask for what you want!


I implore you to try the product.

The value is in being able to prompt AI something like "Here's my data that comes from endpoint X. Give me a table that shows this data that is also searchable."

I personally find that valuable (along with not having to think about which auth vendor and component library to use).


I can do that manually with existing tools in half an hour. Endpoints change very rarely; once you set up your "view" of an endpoint you are mostly done for a long time, with some smaller changes over time.

Also, if I wanted to go the AI route, I could ask Claude Opus or GPT-4: "Here is an endpoint that returns this type of data <paste data type here>; make me a React component that fetches this data and presents it in a searchable table using shadcn."

It will get me 80-90% of the way there; it would need just a little manual tweaking to conform to the current project's code standards.
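
For reference, the component that prompt gets you looks roughly like this (endpoint and data shape are made up; the Table and Input imports are shadcn's stock primitives):

    "use client";
    import { useEffect, useState } from "react";
    import { Input } from "@/components/ui/input";
    import {
      Table, TableBody, TableCell, TableHead, TableHeader, TableRow,
    } from "@/components/ui/table";

    type User = { id: number; name: string; email: string }; // assumed shape

    export function SearchableUserTable() {
      const [users, setUsers] = useState<User[]>([]);
      const [query, setQuery] = useState("");

      useEffect(() => {
        fetch("/api/users") // hypothetical endpoint
          .then((r) => r.json())
          .then(setUsers);
      }, []);

      // Client-side search over the fetched rows.
      const filtered = users.filter((u) =>
        u.name.toLowerCase().includes(query.toLowerCase())
      );

      return (
        <div className="space-y-2">
          <Input
            placeholder="Search by name..."
            value={query}
            onChange={(e) => setQuery(e.target.value)}
          />
          <Table>
            <TableHeader>
              <TableRow>
                <TableHead>Name</TableHead>
                <TableHead>Email</TableHead>
              </TableRow>
            </TableHeader>
            <TableBody>
              {filtered.map((u) => (
                <TableRow key={u.id}>
                  <TableCell>{u.name}</TableCell>
                  <TableCell>{u.email}</TableCell>
                </TableRow>
              ))}
            </TableBody>
          </Table>
        </div>
      );
    }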


This is the exact thing I've personally tried several times, and it doesn't work well. But it works with a prompt in our app right now.

Some reasons listed below:

1. GPT hallucinates shadcn's big data-table implementation more often than not. Also, you don't get props for everything by default (page size, filtering, sticky columns, etc.).

2. We have certain rules we ask our AI to stick to, such as always putting a table in a card and using consistent padding. So there's less cognitive effort spent thinking about how to style your tools.

3. Current LLMs are shaky with Next.js 14, especially on where to use server vs. client components.
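
To illustrate point 3: in the App Router, every file is a server component unless it opts out, and LLMs routinely forget the opt-out when they emit hook-based code:

    "use client"; // drop this directive and Next.js 14 treats the file as a
                  // server component, where useState and onClick are not allowed
    import { useState } from "react";

    export default function Counter() {
      const [count, setCount] = useState(0);
      return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
    }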


> The value is in being able to prompt AI something like "Here's my data that comes from endpoint X. Give me a table that shows this data that is also searchable."

This is a billion-dollar idea even at, say, a 70%+ success rate. And as a data engineer, I assure you the success rate is not there. There are so many edge cases, so many caveats, so many headaches; an AI-generated solution will only work in the perfect API scenario (which never happens in the real world).


Just like Retool (or any other platform), you're expected to bring your own endpoint (but you're also free to write it in our app if you want).

The value will be in closing that feedback loop:

1. Here's the shape of the data coming in from endpoint /api/xyz. Currently, generating an endpoint with this much nuance via AI has a very low success rate (and I think that's what you're talking about).

2. However, once you know the endpoint and what the data looks like, our tool becomes valuable because you can just specify in natural language which component you want to use and how it has to be laid out on the page. Many people will prefer natural language over dragging and dropping UI elements on a screen. And the success rate is far higher for something like this.


All you do is write a single page in the tools folder and deploy.

We embed those tools in our parent web app, which handles authentication, team permissions, etc., so you will never interface with the code that does that.

You just have to focus on your business logic.
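
A sketch of what that single page can look like (the path, endpoint, and data shape here are illustrative):

    // tools/orders/page.tsx — the only code you write; auth and team
    // permissions come from the parent app that embeds it
    type Order = { id: string; customer: string; total: number }; // assumed shape

    export default async function OrdersTool() {
      const res = await fetch("https://example.com/api/orders"); // hypothetical endpoint
      const orders: Order[] = await res.json();
      return (
        <ul>
          {orders.map((o) => (
            <li key={o.id}>{o.customer}: ${o.total}</li>
          ))}
        </ul>
      );
    }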

In terms of the AI copilot, the main value prop is that it knows about our components and uses them to stitch something together fast (instead of writing the components from scratch).


Will look into this, thanks for mentioning!


Mainly it's that the AI knows about our component library, so you don't have to inject context all the time.

And we try to be meticulous about our components' API design; that's where we think most of the value accrues.


It is a difficult space for sure.

We think the value is not in any one thing, but in being very focused on the whole: component library, framework, etc., and not giving options.

The reason is that it all ties together nicely when you prompt our AI to generate the tool you want.

When you say "Give me a table on data X and a bar chart that groups by field X on the same data", no cognitive effort is expended thinking about which component library to use, teaching the AI about that library, or getting the code into your project.

