Which AI coding support/assistance tool do you recommend?

Hi All

I haven't used any AI tool for coding yet; I'm a little late to that party.
Which tool (paid or free) do you recommend, and which one works best with Clojure?

I mean tools like Copilot, Cursor, or Replit … or others.

Which one hallucinates the least? I keep reading that hallucination seems to be the main drawback of these tools.

2 Likes

I am currently using Copilot. I am not sure whether the other GenAI tools would produce better results; I only really use it to generate test cases.

1 Like

Claude.ai can do it. But the code it outputs is usually 5x as long as you need it to be, and messy.
I've saved time using claude.ai a few times when I had a very specific thing to do, and for API exploration it can be pretty good, except for the cases where it hallucinates a library API. I think this is very dependent on the prompt.

I've also tried ChatGPT; would not recommend.

1 Like

I tried different AI tools out of curiosity; most of them are meh, some a bit better.
The only one really worth mentioning is Cursor. The biggest feature for me is that it not only autocompletes but also edits the code, which lets it do things like suggest wrapping forms, fix arguments, format code, or apply a change in your code style to other places. Basically, it helps you deal with small annoying things. Its code completion is also pretty decent compared to most other solutions. It also has a chat where you can ask questions about your code, and it will make change suggestions which you can apply manually, but I don't use that much.
I think it is more useful for other languages, though, not as much for Clojure.
Also, it is an editor, not a plugin, and it is VS Code based. I personally prefer JetBrains IDEs, but with the JetBrains hotkeys plugin and some tweaks to the theme, icons, and file tree indentation I'm kind of enjoying it for now.

1 Like

You might be interested in Colin Fleming’s talk from this year’s Conj: https://youtu.be/oNhqqiKuUmw

He’s built a process for translating code from one language to another by automating prompts to Claude and iterating.

2 Likes

I use Copilot in VS Code – as a paid seat ($19/month for Business).

The autocompletes are a bit wild sometimes… well, most of the time… but usually the first section of the suggestion is good enough to start with so I often accept word-by-word and then ignore the crazy stuff.

Copilot does a decent job of reviewing code and providing suggestions that improve readability. It can edit files – even multiple files – and with clearly-phrased direction does a pretty good job.

Overall tho’, I don’t think basic code gen is where these AI assistants shine. Use them as a pair programmer or a rubber duck debugger. Ask them about design tradeoffs. Use them to learn about library alternatives or architectural options or… well, pretty much anything except raw code gen.

As Ronin said, watch Colin’s talk – but also watch Wesley’s unsession and the several other talks about AI at Conj 2024. I had been using Copilot pretty much just for code gen before and I wasn’t impressed, but since Conj I’ve been using it differently and finding it much more effective as a programming assistant.

2 Likes

I'd recommend getting a subscription to claude.ai or ChatGPT. Then you get access to the latest models at all times (you'll almost always want to use the best one for coding, which currently is Claude 3.5 Sonnet v2 or GPT-4o).

You also get access to their agents and system prompts. So, for example, they can run some Python code or do a search on your behalf.

The downside: you need to copy/paste.

But to me, I've found them most useful for researching how to do something, how to design something, how to implement something, etc., and not so much for auto-complete. Think of them like a really knowledgeable coworker who's always willing to answer your questions, or to look at the code you are writing and give you some hints or ideas to make it work, and who also happens to know everything you can find on Google by heart.

They are also pretty good if you paste in some code and ask them to explain what it does.

I've used them to generate the README.md file for a few of my open source libraries: I give them all my tests and library code, and guide them to produce a README with the sections I want.

I also use them for my CSS: I give them my HTML and ask for the CSS for it in some specific style.

It can help with trivial implementations of things, stuff there's probably already a function for somewhere online. I say trivial in the sense of common: they're really good at LeetCode-style problems, for example. So if you need to implement some common algorithm, like, say, Levenshtein distance, they can generate the function for it easily, where I'd have to first read up on it and learn the algorithm, and it might take me a while to get it right. But if you need it to do something uncommon, like a macro that behaves in some unique way that isn't found in many of the existing libs or applications it was trained on, it won't really help you much.
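For a sense of scale, the kind of "common algorithm" an LLM will happily spit out looks like this: a textbook Levenshtein (edit) distance in Python. This is an illustrative sketch, not any particular model's output:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    # prev[j] holds the distance from the current prefix of a
    # to the first j characters of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```

Exactly the sort of well-trodden dynamic-programming exercise these models have seen thousands of times, which is why they reproduce it reliably.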

P.S.: You do agree that they can do whatever they want with what you send them, though, so you're likely granting them a license to it and whatnot.

1 Like

@didibus I looked up Claude Sonnet, and it seems like GitHub Copilot now offers the option to select it as a model.

I am new to AI agents, so I'm still not sure how the Copilot integration differs from using Claude Sonnet directly from its prompt, or how paying for Claude directly differs from using it via Copilot.

But I am not in a rush to make a choice.

I have not used copilot in a long time, but @seancorfield recently told me it had gotten a lot better.

The biggest difference will be the way you interact. Copilot is integrated with VS Code. One thing it'll do is try to autocomplete as you type using the LLM; my experience with these is that it gets annoying, and it can interfere with the normal auto-complete, which I prefer to use. But you can also send it a piece of code you've selected, or open a pane that lets you chat with it and ask questions about the code file you have open. It also builds a vector search index over your project, so it can answer some questions about your project based on the vector search combined with the LLM.
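For a rough idea of what that project index does under the hood (embed each file, embed the question, rank by similarity), here's a toy sketch. The three-dimensional "embeddings" are made up for illustration; a real setup gets vectors with hundreds of dimensions from an embedding model:

```python
import math

# Toy "embeddings": in reality these come from an embedding model
# run over each file's contents.
FILES = {
    "core.clj":  [0.9, 0.1, 0.0],
    "db.clj":    [0.1, 0.9, 0.2],
    "tests.clj": [0.2, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def top_match(query_vec):
    # Return the file whose embedding is most similar to the query's.
    return max(FILES, key=lambda f: cosine(query_vec, FILES[f]))

print(top_match([0.0, 1.0, 0.1]))  # → db.clj
```

The retrieved files then get pasted into the LLM's context alongside your question, which is how it can answer questions about code it was never trained on.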

Another difference is that you are dealing with a middleman. Copilot resells the OpenAI or Anthropic models, wrapped in its own plugin and interface. Though for now, I think Microsoft doesn't make money from Copilot, so you're probably not paying a middleman tax for it.

Lastly, the difference will be whose system prompt and agents you are getting. The chat subscriptions directly from OpenAI and Anthropic have their own system prompts, along with their own agents.

A system prompt is a prompt that is always included in all queries to the LLM. A good system prompt can make a huge difference. When using LLMs, prompting is what makes the difference. A good prompt can result in a precise and accurate answer, and a bad one in a wrong answer.
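To make that concrete, here's roughly what a request with a system prompt looks like if you call a chat-style LLM API yourself. This is a hedged sketch of the common messages shape; the exact field names and client code vary by provider, and the model name is a placeholder:

```python
# The "system" message rides along with every user turn in the
# conversation, which is why a good one has an outsized effect.
request = {
    "model": "claude-3-5-sonnet",  # placeholder model name
    "messages": [
        {"role": "system",
         "content": "You are a senior Clojure developer. Prefer idiomatic, "
                    "data-oriented solutions, and say so when you are unsure "
                    "about a library API instead of guessing."},
        {"role": "user",
         "content": "How do I deep-merge two nested maps?"},
    ],
}

print(request["messages"][0]["role"])  # → system
```

When you use the claude.ai or ChatGPT apps, the vendor's own (much longer) system prompt fills that first slot for you.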

An agent is when you connect the LLM to external APIs. So, for example, you can ask it a question, and the system prompt can direct it to make use of some external APIs before it answers: it could write code in a Python shell and give you the answer from the execution, or search the web for info it gathers the answer from, etc.
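The loop behind that is simple in outline. A minimal sketch, where the tool, the `fake_llm` stand-in, and its canned replies are all made up for illustration:

```python
# Minimal agent loop: the LLM either answers directly or asks for a
# tool; we run the tool and feed the result back. All names are
# illustrative, not any vendor's real API.

def web_search(query: str) -> str:
    # Stand-in for a real search API call.
    return f"top results for {query!r}"

TOOLS = {"web_search": web_search}

def fake_llm(messages):
    # Stand-in for a real model: request a search once, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "web_search", "args": {"query": "clojure deep-merge"}}
    return {"answer": "Use a recursive merge-with."}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        # Run the requested tool and append its output to the transcript.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("How do I deep-merge maps?"))  # → Use a recursive merge-with.
```

Real agents add safeguards (step limits, tool schemas, error handling), but the request-tool-result-answer cycle is the core of it.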

But you won’t have a vector search over your code base.

Anyways, you could try one for a month and the other the month after, and see which one you like better.

1 Like