Which AI coding support/assistance tool do you recommend?

Hi All

I haven't used any AI tool for coding yet, so I'm a little late to that party.
Which tool (paid or free) do you recommend, and which one works best with Clojure?

I mean tools like Copilot, Cursor, or Replit … others.

Which one hallucinates the least? I keep reading that hallucination seems to be the main drawback of these tools.

2 Likes

I am currently using Copilot. I am not sure if the other GenAI tools could produce better results, but I barely use it for anything beyond generating test cases.

1 Like

Claude.ai can do it. But the code it outputs is usually 5x as long as you need it to be, and messy.
I've saved time using claude.ai a few times when I had a very specific thing to do, and for API exploration it can be pretty good, except for the cases where it hallucinates a library API. I think this is very dependent on the prompt.

I've also tried ChatGPT; would not recommend.

1 Like

I tried different AI tools out of curiosity - most of them are meh, some a bit better.
The only one that is really worth mentioning is Cursor. The biggest feature for me is that it not only autocompletes, but also edits the code, which allows it to do things like suggest wrapping forms, fix arguments, format code, or apply a change in your code style to other places. Basically, it helps you deal with small annoying things. The code completion is also pretty decent compared to most other solutions. It also has a chat, where you can ask questions about your code and it will make change suggestions which you can apply manually, but I don't use that much.
I think it is more useful for other languages, though, not as much for Clojure.
It is also an editor, not a plugin, and it is VS Code based. I personally prefer JetBrains IDEs, but with the JetBrains hotkeys plugin and some tweaks to the theme, icons, and file tree indentation, I'm kind of enjoying it for now.

1 Like

You might be interested in Colin Fleming’s talk from this year’s Conj: https://youtu.be/oNhqqiKuUmw

He’s built a process for translating code from one language to another by automating prompts to Claude and iterating.

2 Likes

I use Copilot in VS Code – as a paid seat ($19/month for Business).

The autocompletes are a bit wild sometimes… well, most of the time… but usually the first section of the suggestion is good enough to start with so I often accept word-by-word and then ignore the crazy stuff.

Copilot does a decent job of reviewing code and providing suggestions that improve readability. It can edit files – even multiple files – and with clearly-phrased direction does a pretty good job.

Overall tho’, I don’t think basic code gen is where these AI assistants shine. Use them as a pair programmer or a rubber duck debugger. Ask them about design tradeoffs. Use them to learn about library alternatives or architectural options or… well, pretty much anything except raw code gen.

As Ronin said, watch Colin’s talk – but also watch Wesley’s unsession and the several other talks about AI at Conj 2024. I had been using Copilot pretty much just for code gen before and I wasn’t impressed, but since Conj I’ve been using it differently and finding it much more effective as a programming assistant.

3 Likes

I'd recommend getting a subscription to claude.ai or ChatGPT. Then you get access to the latest models at all times (you'll almost always want to use the best one for coding, which currently is Claude 3.5 Sonnet v2 or ChatGPT 4o).

You also get access to their agents and system prompt. So for example, they can run some Python code or do a search on your behalf.

The downside: you need to copy/paste.

But for me, I've found them most useful for researching how to do something, how to design something, how to implement something, etc., and not so much for auto-complete. Think of them like a really knowledgeable coworker who's always willing to answer your questions, or have a look at the code you're writing and give you some hints or ideas to make it work, and who also happens to know everything you can find on Google by heart.

They are also pretty good if you paste in some code and ask them to explain what it does.

I've used them to generate the README.md file for a few of my open source libraries: I give them all my tests and library code, and guide them to producing a README with the sections I want.

I also use them for my CSS: I give them my HTML and ask for the CSS for it in some specific style.

It can help with trivial implementations of things, stuff there's probably already a function for somewhere online. I say trivial in the sense of common: they're really good at LeetCode, for example, so if you need to implement some common algorithm, like say Levenshtein distance, they can generate the function for it easily, where I'd have to first read up on it, learn the algorithm, and it might take me a while to get it right. But if you need something uncommon, like a macro that behaves in some unique way that isn't found in many of the existing libs or applications it was trained on, it won't really help you much.
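For example, a minimal Clojure sketch of Levenshtein distance (just the standard dynamic-programming recurrence, roughly the kind of thing these models will produce in one shot) looks something like this:

```clojure
;; Levenshtein (edit) distance: minimum number of single-character
;; insertions, deletions, or substitutions to turn string a into string b.
;; Standard dynamic-programming formulation, built row by row.
(defn levenshtein [a b]
  (peek
   (reduce
    (fn [prev-row [i ca]]
      (reduce
       (fn [row [j cb]]
         (conj row (min (inc (peek row))              ; insertion
                        (inc (nth prev-row (inc j)))  ; deletion
                        (+ (nth prev-row j)           ; substitution (or match)
                           (if (= ca cb) 0 1)))))
       [(inc i)]
       (map-indexed vector b)))
    (vec (range (inc (count b))))
    (map-indexed vector a))))

;; (levenshtein "kitten" "sitting") ;; => 3
```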

P.S.: You do agree that they can do whatever they want with what you send them, though, so you're likely granting them rights over your code and what not.

1 Like

@didibus I looked up Claude Sonnet, and it seems like GitHub Copilot now offers the option to select it as a model.

I am new to AI agents, so I'm still not sure how the Copilot integration differs from using Claude Sonnet from the prompt, or how paying for Claude directly differs from using it via Copilot.

But I am not in a rush to make a choice.

I have not used copilot in a long time, but @seancorfield recently told me it had gotten a lot better.

The biggest difference will be the way you interact. Copilot is integrated with VS Code. One thing it'll do is try to autocomplete as you type using the LLM; my experience is that this gets annoying, and it can interfere with the normal auto-complete, which I prefer to use. But you can also send a selected piece of code to it, or open a pane that lets you chat with it and ask questions about the code file you have open. It also builds a vector search index over your project, so it can answer some questions about your project based on the vector search combined with the LLM.

Another difference is that you are dealing with a middleman. Copilot resells the OpenAI or Anthropic models, wrapped in their own plugin and interface. Though for now, I think Microsoft doesn't make money from Copilot, so you're probably not paying a middleman tax for it.

Lastly, the difference will be whose system prompt and agents you are getting. The chat subscriptions directly from OpenAI and Anthropic have their own system prompts, along with their own agents.

A system prompt is a prompt that is always included in all queries to the LLM. A good system prompt can make a huge difference. When using LLMs, prompting is what makes the difference. A good prompt can result in a precise and accurate answer, and a bad one in a wrong answer.
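As a rough sketch of what that looks like if you call a chat model's API yourself (this assumes clj-http and cheshire, and the prompt text here is just an example), the system prompt is simply the message with the "system" role that gets sent along with every user question:

```clojure
(require '[clj-http.client :as http]
         '[cheshire.core :as json])

;; The "system" message is included in every request; the user's
;; question changes, but the system prompt steers every answer.
(defn ask [api-key user-question]
  (-> (http/post "https://api.openai.com/v1/chat/completions"
                 {:headers {"Authorization" (str "Bearer " api-key)}
                  :content-type :json
                  :as :json
                  :body (json/generate-string
                         {:model "gpt-4o"
                          :messages
                          [{:role "system"
                            :content "You are a senior Clojure developer. Answer concisely and prefer idiomatic core functions."}
                           {:role "user"
                            :content user-question}]})})
      (get-in [:body :choices 0 :message :content])))
```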

An agent is when you connect the LLM to external APIs. So, for example, you can ask it a question, and the system prompt can direct it to make use of some external APIs before it answers: those could have it write code in a Python shell and give you the answer from the execution, or it could search the web for info to gather the answer from, etc.

But you won’t have a vector search over your code base.

Anyway, you could try one for a month, and the other the month after, and see which one you like better.

1 Like

I'll add a vote for Perplexity, which distinguishes itself with a commitment to "precision". Unlike other services, it relies on search to augment its results and can often provide citations relevant to the question being asked. I have found the service to be notably better for some specific types of technical queries, particularly around usage of specific APIs and things like that. I am still early on in playing around with it and deciding if/how to integrate it into my workflow, but so far I can say it has been useful and is worth some investigation.

The Claude and ChatGPT subscriptions also have this functionality, FYI.

By the way, I personally dislike that aspect. Using LLMs to help you search is alright, in that it'll read through the top search results and try to find your answer. But sometimes that's not what you want: what you want is for it to leverage the large dataset over which it learned its probabilities and give you the most likely answer, not the answer that some website has.

With the ChatGPT subscription, you can turn search on or off as you chat with it, based on whether you want a factual result, like who won last Friday's football game (where you want it to do a Google search for you and return the result), or whether you want to leverage the LLM for prediction, like which sports team is most likely to have been the greatest of all time.

With Clojure and ClojureScript (mainly Electric Clojure), I have mainly used Cursor + Claude.ai (3.5). The results are very good. I have tried a lot of other tools (Cody, aider), but I found that the best LLM for coding is Claude.ai, and the results are consistently better than other LLMs such as ChatGPT 4o. I also try the latest Mistral for code (quite impressive results), and I chat a lot with ChatGPT as a sparring thinking partner.

Because of the nature of my job, I have also generated code in Elixir, Scala, and Java with Cursor + Claude, and it was very good as well.

I usually try to make prompts that attempt to do one single thing, and I provide the necessary source code (quite easy with Cursor). I don't use the autocomplete features, as I find them to be mostly noise, but I use the Chat feature and the Composer. I am also a TDDist and all the code has tests, a lot of which are generated.

I have a Cursor subscription ($20/month).

1 Like

I would like to include a sample prompt (I am playing with it now) for Cursor Composer, which generates quite a few files. It's a Scala one, but it could give others an idea of how to prompt Cursor Composer:

## What I have done (example, in 5 steps)

I developed a feature called Login using outside-in TDD in my Scala project, for a domain entity called User, defined as @ User.scala.
The path to my GET endpoint is:

### Step 1: Acceptance test

First I made an acceptance test @ LoginAcceptanceTest.scala (module 03-infra).
It uses a @UserGenerator.scala to generate new Users (module 03-infra).
It also uses the edpClient to make the login http call to the server.
It uses a @LoginRequest.scala and a @ LoginResponse.scala (module http-client).

In order for it to fail for the right reason, I had to create the endpoint:
and return some dummy data so the test fails, because the result isn't the same, but the server works.

### Step 2: TDD the controller using a unit test

I made a unit test for the endpoint @ AuthControllerTest.scala (module 03-infra), then the code @ AuthController.scala (module 03-infra), which extends @ AuthEndpoint.scala (in module http-client). Also I added the AuthController to the Env @ Main.scala (module http-client).

### Step 3: TDD the use case using a unit test

I made a unit test @ LoginTest.scala and the @ Login.scala use case (both module 02-app).

### Step 4: TDD the Mongo repository functions using an integration test

I created an integration test, @ LoginRepositoryIT.scala (module 03-infra), for @ LoginMongoRepository.scala (module 03-infra), which overrides the trait @ LoginRepository.scala (module 02-app).

### Step 5: Acceptance test passes

When all the other tests pass, the acceptance test passes.

## What I need now

I now want to create a new feature, called FindDocuments, for a domain entity called Document to be defined as:

```
case class Document(
  override val id: UUID,
  title: String,
  dateAdded: Instant,
  size: Int,
  unread: Boolean = true
) extends Entity[UUID] derives BaseCirce
```

So we'll need a new controller, DocumentsController, which extends a new endpoint GET "api/1/documents", receives a DocumentsRequest, and returns a DocumentsResponse, which will contain a list of DocumentDTO. It will use a DocumentGenerator to write the acceptance test: FindDocumentsAcceptanceTest.

I will also need the use case FindDocuments and the DocumentsMongoRepository that extends DocumentsRepository.
Then I will need the DocumentGenerator for the acceptance test FindDocumentsAcceptanceTest, then the unit tests FindDocumentsControllerTest and FindDocumentsTest, and the integration test for DocumentsMongoRepository.

I'm curious, do you have the AI generate a test for the code you have yet to write? Like, you'd say: write me a test for a unit with a method "Foo" that, given X, returns Y? And assert that if X is less than zero, it throws; if X is …

Then it gives you the tests. And then you go and implement the code to pass the tests?

Is that what you do?

Edit: Oh I see your example prompt. So does this prompt go and generate both tests and the implementation? Or are you describing tests that you manually wrote?

I use one-shot prompting. Basically, I give an example of an entire feature implemented, with all the code and tests (that's why you see the @ …scala references, which include the files), then ask for the next one to be generated in the same manner, with all the code and all the tests.

1 Like

I will make a video, when the pressure at my work decreases, to show how I (we) generate Clojure code using Cursor Chat and Cursor Composer with Claude.ai.

2 Likes

@danbunea thank you, this would be awesome 🙂

I did one in a hurry, but it expresses the idea: you make an example by hand, then provide it in the prompt and ask for something, in this case 3 more migrations for Datomic:
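To give an idea of the shape of such a hand-written example (the attribute names below are just hypothetical placeholders, not the ones from the video), a Datomic migration is basically a transaction of schema maps:

```clojure
;; One hand-written migration, given to the LLM as the example;
;; it is then asked to write the next migrations in the same style.
(def add-user-schema
  [{:db/ident       :user/id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity
    :db/doc         "External id of the user"}
   {:db/ident       :user/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/doc         "Email address of the user"}])

;; Applied with the Datomic client API:
;; (d/transact conn {:tx-data add-user-schema})
```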

1 Like

I promise I will do a better one of an entire feature, with all the tests too, in the future.

1 Like