I moved my blog from LiveJournal here and to a static GitHub website. This time I mostly used Copilot to create the blog, so this is a good opportunity to record the current state of the technology.

This is not going to be yet another "I vibe-coded something for the first time, wow-wow-wow" article. And not "AI is going to replace me, we are doomed" either.

Of course I use AI tools professionally every day. It's just interesting to record what the technology can do today and what it cannot (yet).

Firstly, I am writing this text with the support of AI (TAB completion), and it really saves time. Compared to before, it guesses right quite often, and it is especially good at correcting grammar. But it still can't guess what I am going to write. It can, though, finish my jokes - which means they are not that good.

TAB completion still offers suggestions that are too long and too wrong, so ignoring them or pressing ESC is a new skill.

Agents are great. I mean, really powerful now. But still not as good as a human.

I asked it to create a static blog with HTML and CSS, so that it can be hosted on GitHub Pages. I mentioned in the prompt that I will include it in my ASP.NET website, which will just index and parse the HTML and display pages almost as is, adding comments and external styling. So the styles should have distinct names and be reusable.

The results were quite good, though not perfect. Maybe that's because creating a blog is quite a standard task, so AI is trained on many good examples. It created multiple well-organised files, all quite good.

Still, the HTML was slightly overcomplicated. I had to simplify it a bit to make it easier to write and reuse in ASP.NET. Honestly, one thing I kept in mind but didn't tell Copilot: for me, the main reason to use GitHub is that I can create content in the client from my phone. This really is simpler than building any editor for the blog, but it means the HTML must be extremely simple and the CSS easy to remember and simple to reuse. So I had to simplify the CSS and HTML a bit myself, but there TAB completion worked perfectly: it was enough to start typing, and it helped to finish.
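For illustration, the level of simplicity I was aiming for looks roughly like this (the class names are my hypothetical example, not the actual markup):

```html
<!-- a post entry plain enough to type in the GitHub client on a phone -->
<article class="post">
  <h2>Post title</h2>
  <p class="date">2026-01-01</p>
  <p>First paragraph.</p>
  <p>Second paragraph.</p>
</article>
```

Nothing to remember beyond a couple of class names, and nothing for the parser to trip over.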

AI is already better than me at creating UI styles. I mean, I always knew I am a terrible UI designer, and my website is still quite ugly, as you can see. I am a bit worried that it is too "standard", lacking a "personal touch", but for a blog I consider it acceptable.

AI is extremely good at routine tasks - for example, for some reason I wrote this text as an unordered list (ul), and then just asked to change it to paragraphs. And it worked immediately.

For code, AI was never hallucinating too much, or at least the hallucinations were easy to spot. But this time I didn't encounter any at all.

The next task was to parse index.html and display it on my website.

The first version I got was functionally correct but didn't look good. This is when I discovered that the generated CSS was not entirely reusable. There were things bound to "body" that obviously couldn't work. This part was quite easy to improve, but it was the first one where I found it faster to do it myself than to guide Copilot.
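A sketch of the problem (the selectors here are illustrative, not the actual generated ones): rules bound to body cannot apply once the content is embedded into another page, while a dedicated wrapper class can:

```css
/* generated: bound to the hosting page, useless when the blog is embedded */
body { max-width: 40em; margin: 0 auto; }

/* reusable: scoped under a distinct class the host page can wrap around the content */
.blog-content { max-width: 40em; margin: 0 auto; }
```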

C# code was "acceptable". I mean, it worked, but it was one big method. I prefer things split into small logical methods. So for a start, I at least moved the logic for parsing the static site into a separate class.

As a next step, I asked Copilot to create a new controller and view to show a blog post. It worked, but again it added one big method to the controller. I had to refactor it and move it to a separate class, where I refactored again to extract common methods - like parsing tags.

Quite interestingly, Copilot failed to find existing code to reuse and failed to extract methods for repeated operations - like parsing tags. It just repeated the same several lines of code finding the beginning and the end of a tag. It didn't even offer to use a library - I imagine there must be some. I would probably avoid a third-party library for something that simple, but I would understand the approach.
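The kind of helper I ended up extracting looks roughly like this (sketched in JavaScript for brevity - my actual code is C#, and the function name is mine, not Copilot's):

```javascript
// Extract the inner text of the first occurrence of a simple tag,
// e.g. <h2>...</h2>. The repeated indexOf arithmetic that Copilot
// copy-pasted at every call site becomes one reusable function.
function extractTag(html, tagName) {
  const open = "<" + tagName + ">";
  const close = "</" + tagName + ">";
  const start = html.indexOf(open);
  if (start === -1) return null;          // tag not found
  const from = start + open.length;
  const end = html.indexOf(close, from);
  if (end === -1) return null;            // unclosed tag
  return html.substring(from, end);
}
```

This only handles attribute-free tags, which is all my deliberately simple markup needs.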

At this moment I looked at the CSS and wanted to make it nicer. With the SCSS I use at work, I got used to well-organised nested classes, and I asked Copilot to organise the CSS that way. Several times in a row it said that it understood me and then generated the same three separate classes. It's already good that it didn't start hallucinating and implementing non-working stuff. But I had to explicitly ask whether it was possible at all to get the answer that actually it was not. So Copilot still works in "pleasing the master" mode; we need to remember to formulate questions in a way that invites questioning the question itself. I find this quite a serious limitation. At least in this area, human colleagues still have a big advantage.
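For context, what I was asking for (class names illustrative): in SCSS, related rules nest under one parent, while plain CSS - at least without the recent native-nesting support - needs flat selectors:

```scss
// nested SCSS: one block instead of three flat rules
.comment {
  border: 1px solid #ccc;
  .author { font-weight: bold; }
  .date { color: #888; }
}
```

This compiles to the three separate selectors (.comment, .comment .author, .comment .date) - which is exactly what Copilot kept generating, because that is all plain CSS could express.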

Then I asked it to implement "Comments" in ASP.NET. It decided to implement a simple action in the controller and some JavaScript code. In general, it was fine.

Then I went on, asking to make a tree of comments (with replies). The resulting HTML was a bit too complicated, but I then made it nicer half-manually.

Still, it implemented a single field to add a comment at the bottom. I would prefer the reply box to be displayed right below the respective comment, but for that Copilot generated repetitive HTML. It didn't offer a template for a comment that could be reused where required, so I just didn't apply that portion and left it for later.

The next task was to mark new comments. Despite producing pretty much working code, Copilot surprisingly preferred JavaScript to CSS. When asked, it started moving things to CSS, but some leftovers, like adding a "New" label, stayed in JavaScript. When I later asked in Visual Studio Code (another model) to style a "private" post, it added the text of the badge via CSS, which I find much cleaner. Which suggests the models I used in Visual Studio may be just a bit weaker.
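The CSS-only variant looks roughly like this (selector and styling are illustrative): the badge text comes from the stylesheet, so no JavaScript needs to insert a label:

```css
/* "New" badge added entirely from CSS via generated content */
.comment.new::before {
  content: "New";
  background: #c00;
  color: #fff;
  padding: 0 0.4em;
  margin-right: 0.5em;
  border-radius: 3px;
}
```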

And the last task I wanted was to migrate the existing comments from LiveJournal. As agents (at least in my setup) can't access the internet, I proceeded by copy-pasting and formatting all the comments, so that C# code could then parse and insert them. Copilot offered an almost functional solution, but with a couple of weaknesses. It failed to propose a reasonable format, and therefore generated quite overcomplicated code to parse it: it read the input line by line and either appended to the "previous" comment or started a new one.

I then suggested another format that can simply be split into comments, each processed in a loop. But then I had to explicitly ask to rewrite the code that way.
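The idea, sketched in JavaScript (the delimiter and field layout are my illustration, not the actual format): with an explicit separator between comments, parsing collapses to a split and a loop instead of stateful line-by-line reading:

```javascript
// Hypothetical dump format: comments separated by a "---" line;
// the first line of each block is the author, the rest is the text.
function parseComments(dump) {
  return dump.split("\n---\n").map(block => {
    const lines = block.trim().split("\n");
    return { author: lines[0], text: lines.slice(1).join("\n") };
  });
}
```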

The code was quite "junior level" to me. Copilot created only one extra method, to insert a comment into the database; all the rest went into one big method.

I wrote "almost" above - yes, here Copilot didn't get enough context to check the rest of my code. It didn't find the correct name of the connection string in my code; instead, it hallucinated an almost-correct one. It also didn't see that it was an Entity Framework connection string, so it simply didn't work. I had to mention that explicitly, and then it offered a correct solution.

The next problem was that the connection string didn't have a password (which is correct for code I commit to a public repository). Copilot immediately suggested adding the password to the connection string. For me, this is quite an important indicator: the offered solution is completely functional but has a big security issue. If no one reviewed the model's output, this could end up in production.
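A safer pattern, sketched with illustrative names: commit a password-free connection string and supply the secret outside the repository, e.g. via `dotnet user-secrets` in development or an environment variable in production:

```json
{
  "ConnectionStrings": {
    "BlogDb": "Server=localhost;Database=Blog;User Id=blog;"
  }
}
```

The full string with the password then comes from `dotnet user-secrets set "ConnectionStrings:BlogDb" "...Password=..."` locally, which overrides the committed value without it ever touching the repository.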

Related to the above, it doesn't follow the existing approaches; it just does things its own way all the time. Again, maybe more powerful models will be able to analyse more context in the future.

AI constantly overcomplicates things, and I have to remind it to keep things simple. There will be multiple checks that input data is correct, even when this input is strictly generated by the code the same AI just wrote. At the same time, external input is assumed to be exactly as in the prompt, without any checks.

There are lots of obvious comments generated. This looks like a good plan that is then expanded into an implementation, but I would prefer to then remove the redundant comments where the code is self-explanatory. In general, I don't like obvious comments: they just clutter the code (assuming the code is self-explanatory) and can quickly get outdated as the code changes. But I assume this style may be easier for AI to understand, and AI will probably update all the comments consistently.

And speaking of debugging: with all the complications described above, it's pretty difficult. I ended up slightly rewriting the code first.

In general, there are still areas that are easier to do manually. Sometimes asking AI to do them ends up more time-consuming than doing it myself; the problem is to notice this quickly and stop wasting time.

Summary

AI copilots are becoming increasingly powerful and are now a real help in software development. Yet they are still a tool, greatly amplifying one's strengths as well as weaknesses. The main skill to master now is finding the boundaries: where to use and trust AI, and where to keep control or even do things manually.



