DISAPPOINTED WITH TRAE AND/OR GITHUB SPEC-KIT

The implementation of my sample project just ended. It took about one or two hours. It was difficult to measure because I was not in front of the computer the whole time, and from time to time TRAE or GPT-5 reported it had completed when it had not. There were also other times when TRAE said I had consumed the “Thinking Limit” or something and I needed to hit “Continue”, and I honestly don’t know why.

Then there is the output. Horrible. The UI did not make sense at all. I am building an AI-driven tool, so there is a chatbot component; it stayed almost at the center of the screen the whole time, and only the remaining screen space was used to try to guess the rest of the functionality.

I believe the spec has enough detail for a better implementation. I also put some thought into the technical plan. The only part where I let the AI do its thing was the task breakdown.
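
For context, the Spec-Kit flow I am referring to looks roughly like this (as documented in the project README when I tried it; exact command names may differ in newer versions, and “my-project” is just a placeholder name):

    # Bootstrap a new project with Spec-Kit (requires the uv tool)
    uvx --from git+https://github.com/github/spec-kit.git specify init my-project

    # Then, from inside the coding agent (TRAE, in my case):
    /specify ...    # describe what to build; produces the spec
    /plan ...       # describe the stack and architecture; produces the technical plan
    /tasks          # generate the task breakdown from the plan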

I know I am trying to do too much for a one-shot implementation, since my requirements are complex. However, that’s how one of the creators of GitHub Spec-Kit makes it look in their video: he pretty much writes a one-shot spec, asks the AI to implement it, and an almost complete app comes out. I know it does not work like that, and I know AI is not that great yet, but I honestly expected a bit better.

I will give this another shot, because I have the feeling that either TRAE or GPT-5 had something to do with this failed experiment. Just a hunch; I wouldn’t be able to say what it is. I suspect Claude Code will do considerably better. Let’s see.