Rajiv Ayyangar

What's your Replit "vibe coding" workflow? (I'm struggling here...)


For Valentine's Day I used @Replit to make an app (PWA) that lets me and my partner "hug" via notifications, and has an exploding heart when we're both touching a button at the same time.


What I'm stoked on:

I was able to get a prototype in minutes, even though it had a lot of flaws. It took me a couple of sessions over a few days to build an app that kind of works. This was pretty mind-blowing, as my coding ability is fairly rudimentary (background in data science; I've done some basic JavaScript but never built a standalone app).

Using @Wispr Flow sped things up and was also impressive. It's cool to see the little touches Wispr has implemented to improve dictation, and I'm just impressed by the speed and fluidity of using it on desktop. As a side note, using Wispr has made me increasingly frustrated at how poor iOS's built-in voice dictation is.

What I'm frustrated by:

I had to restart building the app a couple of times, because when I tried to get it to fix certain things, it would create more problems. It felt like working with a very fast but extremely inexperienced developer who has no sense of when they've wandered down a dead end in the maze.

I haven't developed an intuition for when to use the agent and when to use the assistant. Replit shows that the agent is more expensive than the assistant, which stressed me out a bit, although I realized I never hit the threshold where it started charging me more than my normal subscription. So I think it was just a psychological thing.

It had repeated errors around WebSocket connections and dealing with notifications on iOS. I had to get deeper and deeper into these myself, and in one case, teach it how to do notifications properly on iOS. It turned out that Replit's information was outdated, and it thought it wasn't possible to do native notifications with a PWA on iOS. So I had to actually get code from ChatGPT to teach it.
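
(For anyone hitting the same wall, here's a rough sketch of the setup iOS actually supports now: Web Push works in a PWA on iOS 16.4+, but only once the app is added to the Home Screen and the permission request comes from a tap. The VAPID key and the /api/subscribe route below are placeholders for your own backend, not anything Replit generates.)

```ts
// Rough sketch of Web Push in a PWA. On iOS this needs 16.4+, the app added
// to the Home Screen, and a user gesture (like the hug button) to ask for
// permission. VAPID_PUBLIC_KEY and /api/subscribe are placeholders.
const VAPID_PUBLIC_KEY = "<your server's VAPID public key>";

async function enableHugNotifications(): Promise<void> {
  if ((await Notification.requestPermission()) !== "granted") return;

  const registration = await navigator.serviceWorker.register("/sw.js");
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: VAPID_PUBLIC_KEY,
  });

  // Hand the subscription to your backend so it can send pushes later.
  await fetch("/api/subscribe", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(subscription),
  });
}

// In sw.js, a "push" event listener then calls
// self.registration.showNotification(...) to actually display the hug.
```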

It was stressful to make a change knowing that I could break (and was likely to break) many other things, and I didn't know how to properly create a checkpoint I could fall back to. It creates checkpoints all the time, but I couldn't figure out how to name a checkpoint or save it in a way that was easily recoverable.

Because it takes some time for Replit to compile, I've found myself doing other things like work, little tasks, or even watching a TV show while waiting for Replit. But then I had to keep checking, because it wouldn't send me a notification. I wonder if there's a way to get notified when it finishes?

Help me make my workflow better! What am I doing wrong / what could I be doing that I'm not doing?


Replies
Matt Carroll

Here are some things I've used for working with AI generally, not necessarily specific to Replit:

1. (you touched on this already)
Learn to commit your work early and often when using AI. (Aside: it would be cool to make a tool that just generates a commit every time you prompt the AI, so you can easily go "back in time" in the code based on your chat history with the bot... see the sketch below.)

By saving work often you can just revert and not have to declare "AI bankruptcy".
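
Here's roughly what that aside could look like. Nothing in it is a real Replit or agent API; promptAi is just a stand-in for whatever chat call you're making, and the git snapshot is the useful part:

```ts
// Sketch of the "commit every prompt" idea. promptAi is a made-up placeholder;
// the git snapshot after each prompt is the point.
import { execFileSync } from "node:child_process";

async function promptAi(prompt: string): Promise<void> {
  // ...send the prompt to the agent/assistant of your choice...
}

export async function promptAndCommit(prompt: string): Promise<void> {
  await promptAi(prompt);
  // Snapshot whatever the AI just changed, with the prompt as the message,
  // so `git log` becomes a replay of your chat history with the bot.
  execFileSync("git", ["add", "-A"]);
  execFileSync("git", ["commit", "--allow-empty", "-m", `ai: ${prompt}`]);
}
```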

2.
Have the assistant / bot / intern write tests for each step. Try to make them really simple and easy for you personally to reason about.

Again, not sure about the flow specifically with Replit, but I think tools like Windsurf can run tests or leverage them. At a minimum they can help you figure out where things broke and recover from a stumble.
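
Something like this is the level of simplicity to aim for (Vitest syntax here since I had to pick a runner; toChartRows is a made-up helper that fits the chart example in point 3):

```ts
// A dead-simple test to ask the bot for at each step. Vitest is an assumption;
// swap in whatever runner your project already has.
import { describe, expect, it } from "vitest";

// Hypothetical helper the AI just wrote: group raw rows into chart rows.
function toChartRows(raw: { day: string; source: "mobile" | "browser"; views: number }[]) {
  const byDay = new Map<string, { day: string; mobile: number; browser: number }>();
  for (const r of raw) {
    const row = byDay.get(r.day) ?? { day: r.day, mobile: 0, browser: 0 };
    row[r.source] += r.views;
    byDay.set(r.day, row);
  }
  return [...byDay.values()];
}

describe("toChartRows", () => {
  it("groups views by day and source", () => {
    expect(
      toChartRows([
        { day: "Mon", source: "mobile", views: 3 },
        { day: "Mon", source: "browser", views: 2 },
      ])
    ).toEqual([{ day: "Mon", mobile: 3, browser: 2 }]);
  });
});
```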

3.
All complex systems grow from simple systems. I often find that AI will give me a "far too polished version" out of the gate that is riddled with bugs. Let's say I want to generate a chart in my app.

There are maybe 4 steps: 1. fetch the data, 2. organize it for the charting library, 3. make the chart itself, 4. put the chart onto the page.

My flow for this with AI would look like:

make me a bar chart with dummy data that shows daily views, having two columns: mobile and browser <-- prompt 1

put the chart into my page: index.tsx <-- prompt 2 (and visualize)

-- iterate until chart looks reasonable --

fetch the data from this place and print it out <-- prompt 3

restructure the data to the input format for our bar chart <-- prompt 4

plug the data into the chart itself <-- prompt 5

The key gist is to always go from working state -> working state when you iterate.
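
To make that concrete, here's roughly what a good answer to prompt 1 looks like. I'm assuming Recharts just to have something on the page; swap in whatever chart library you actually use:

```tsx
// Prompt 1's output: a chart component with hard-coded dummy data, no fetching
// yet. Recharts is an assumption here, not something named above.
import { Bar, BarChart, Legend, Tooltip, XAxis, YAxis } from "recharts";

const dummyData = [
  { day: "Mon", mobile: 120, browser: 80 },
  { day: "Tue", mobile: 95, browser: 110 },
  { day: "Wed", mobile: 140, browser: 70 },
];

export function DailyViewsChart() {
  return (
    <BarChart width={480} height={280} data={dummyData}>
      <XAxis dataKey="day" />
      <YAxis />
      <Tooltip />
      <Legend />
      <Bar dataKey="mobile" fill="#8884d8" />
      <Bar dataKey="browser" fill="#82ca9d" />
    </BarChart>
  );
}
```

Prompts 3-5 then only have to swap dummyData out for real data, which is exactly the kind of small, testable step from point 2.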

Hope that helps!

Chris Messina
Top Hunter

Extremely relatable!


I'm working on adding posting capabilities to my Threads for Raycast extension, and @Windsurf is completely confused about how to hook up the Threads API to the Raycast API using Raycast's OAuth service.


I've tried a number of approaches, from letting Windsurf write the entire flow (got close, but failed on Raycast's end) to ripping all that out and forcing it to learn from other similar extensions, only to fail again, largely because of what you're reporting... the training data knows how to do "generic OAuth", but not OAuth that works between Raycast and Threads.
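
For context, this is roughly the shape of what I'm asking it to produce with Raycast's OAuth utilities. The Threads endpoints, scopes, and IDs below are from memory, so treat them as placeholders to check against the Threads docs rather than as working code:

```ts
// Sketch only: Raycast's PKCE client pointed at Threads. Endpoints, scopes,
// and credentials are assumptions to verify against the Threads API docs.
import { OAuth } from "@raycast/api";

const CLIENT_ID = "<your Threads app client ID>";

const client = new OAuth.PKCEClient({
  redirectMethod: OAuth.RedirectMethod.Web,
  providerName: "Threads",
  providerIcon: "threads-icon.png",
  description: "Connect your Threads account",
});

export async function authorize(): Promise<string> {
  const existing = await client.getTokens();
  if (existing?.accessToken) return existing.accessToken;

  const authRequest = await client.authorizationRequest({
    endpoint: "https://threads.net/oauth/authorize",
    clientId: CLIENT_ID,
    scope: "threads_basic,threads_content_publish",
  });
  const { authorizationCode } = await client.authorize(authRequest);

  // This is where it keeps falling apart: Threads wants a server-side
  // code-for-token exchange with the app secret, which doesn't map cleanly
  // onto the generic PKCE flow the training data knows.
  const res = await fetch("https://graph.threads.net/oauth/access_token", {
    method: "POST",
    body: new URLSearchParams({
      client_id: CLIENT_ID,
      client_secret: "<app secret: really belongs behind a proxy, not here>",
      grant_type: "authorization_code",
      redirect_uri: authRequest.redirectURI,
      code: authorizationCode,
    }),
  });
  const { access_token } = (await res.json()) as { access_token: string };
  await client.setTokens({ accessToken: access_token });
  return access_token;
}
```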


It's super frustrating. I want to be able to check in what I'm working on so someone on the @Raycast team can review it, but trying to make sense of their massive GitHub repo is just slightly above my desired effort threshold.


The funny thing is @Claude by Anthropic wrote most of the first (and functional!) version of the extension... but now that I'm using a coding editor, I'm having LESS success. 😡

Rajiv Ayyangar

@chrismessina do you start over a lot? Or plod through generally. I'm trying to get better at figuring out whether to:
1) start over
2) persist
3) (ugh) dive in and learn the details myself (usually assisted by another foundation model).

Chris Messina
Top Hunter

@rajiv_ayyangar all three... I'm on my 4th attempt to get this OAuth connection written. I keep every iteration so I can return to prior attempts as needed, or, as I gain insights and get more familiar with the concepts, come back to some of the more esoteric (yet scalable/maintainable/proper) ways to do things.

For example, I'm also trying to use other people's libraries, but then I have to get familiar with how to integrate them into my codebase. Raycast prefers cleaner/simpler code, so if the library isn't available via NPM, I have to rewrite it anyway.


So, my lingering question is: should I bite the bullet and take something like a TypeScript 102 course, or sit down to watch a few YouTube videos, or just keep grinding at it, hoping that some combination of persistence and randomness will reveal the ultimate cheat code.... 🤷🏻‍♂️

Robin Prime

When I hit a roadblock, I Google smart, read the error properly, and ask AI for context.

Michael Tchong

I also ran into constant problems. The most remarkable discovery was that it simply could not code a bullet-proof login system, something every SaaS developer needs! After about a dozen tries, I left Replit and now use Co.dev, which actually works!

Rajiv Ayyangar

Yeah, hopefully it will get better at some of the standard features. I haven’t tried co.dev! I’ll check it out.

Kody Low

Hey Rajiv, thanks for the feedback. I'm going to try to address as many of the points you made above as I can; let me know if you have further issues or questions about them. Lots of these are just product improvements on our side that are landing soon, but based on your descriptions, here's some info that might be helpful for improving your vibe coding experience.

Avoiding the Agent Doom Loop: the state you're describing, where you ask the Agent to do something, it doesn't do it or breaks something, you ask it to fix it, it breaks it more, etc., is something we're extremely focused on fixing. The number of times this happens goes down every week as the Agent improves and models generally get better, but when the Agent does checkpoint at a bad point, you should roll back as early as possible and then re-attack with additional info in your prompt about what NOT to do, instead of going down a doom loop.

There are a couple of product / architecture improvements coming in the next week or so for Agent that should bring significant improvements by default, but here's some additional context that might be helpful to you:
1. Why it happens: The Agent (language models generally) is worse at fixing something it broke than at doing it correctly on the first shot. When you ask the Agent to do something, it doesn't work, and you follow up with "it's still not working" or "fix it", then unless there are explicit console logs the Agent won't have sufficient information about WHY what it just did isn't working, so it will try to keep what it did and write more code, which tends to compound the problem.
2. Things that'll address it, all coming soon: better debugging and reasoning steps, better identification of a loop state and bringing in a reasoning or stronger model to debug, and backtracking and re-attacking the problem with a rollback instead of trying to build on the broken code.
3. What you can do right now to have a better experience: If you ask the Agent to do something and it breaks something, you should always hit rollback. Checkpoints are git commits tied to any database changes or updates that the Agent makes during the step. Never keep a non-working checkpoint; always hit rollback. Then repeat the prompt with additional information about what NOT to do or what error to avoid.

Model Knowledge Cutoffs:
1. Why it happens: The Agent uses a mixture of models. Most of the code writing is done with the GOAT, Claude-3.5-sonnet-v2. Its knowledge cutoff is from several months ago, so when you ask it about recent things, it sometimes gives outdated info. Replit maintains a bunch of integrations with commonly used tools, APIs, and frameworks that the Agent will reference to get more up-to-date info on correct usage, but if it's outside those integrations, it's limited by the model's knowledge cutoff.
2. Things that'll address it, all coming soon: more integrations, some mobile-specific improvements, and Agent search capabilities to find up-to-date info.
3. What you can do right now to have a better experience: You can feed documentation and information to the Agent by passing website links or just pasting files into the "agent_assets" directory. We're trying to keep things up to date as best we can, but there'll be a lag purely based on the model training cutoffs that's kinda unavoidable.

Distinction Between Agent and Assistant:
As the Assistant gets better at doing more complex things and the Agent gets better at identifying when something is simple, the two products grow toward each other and might eventually merge. Currently the distinction exists because it's hard to classify the complexity of the user's intention, so that classification is done by the user's choice of one tool or the other.

In Pair Programming terms, Agent is the driver and Assistant is the navigator. When you're using one, you have to act as the other for it.
When using Agent you have to be a good navigator: Focusing on giving clear instructions, spending more time thinking about what you ask it to do before it does it, and stopping it before it starts going off in a bad direction.
When using Assistant you have to be a good driver: Focusing on getting clear instructions from the Assistant before taking an action, ensuring you understand what and why you're doing the things the Assistant is instructing you to do.
A good vibe-check for which one to use: if you want to do something across multiple files / multiple features that's additive (a new feature), use the Agent. If you're editing or removing something (tweak this page, remove this button, etc.), use the Assistant.

Overall Biggest Recommendation: If Agent/Assistant Break Something -> Rollbacks, Rollbacks, Rollbacks
Overall the biggest recommendation is to make fast, liberal use of rollbacks. The Agent will get better every day, especially with new models, so the number of times it breaks something will go down. But when it does break something, always always always hit rollback as quickly as possible.

There's a bunch of improvements coming to this soon, even letting the Agent try to recognize when it should roll back itself, but to be clear: checkpoints are git commits tied to database versions, and hitting rollback will do a perfect restoration of your app at the point the checkpoint was made. You're not going to break anything by doing it; what you'll do is clear out the broken code and give the Agent a clean opportunity to retry, which will have a much higher success rate.