I’ve been experimenting with several models locally and the variability is intriguing. In some cases, efficacy seems to be inversely proportional to the size of the model. Ironically, for coding tasks, Gemma seems to provide more meta info than Llama :-)
But have you tried Racoon or Gazelle? What about Iguana? I hear Interstellar is great for Python…