2 Comments
Joey A

I’ve been experimenting with several models locally and the variability is intriguing. In some cases, efficacy seems to be inversely proportional to the size of the model. Ironically, for coding tasks, Gemma seems to provide more meta info than Llama :-)

Dr. Jake Tuber

But have you tried Raccoon or Gazelle? What about Iguana? I hear Interstellar is great for Python…
