Discussion about this post

Joey A:

I’ve been experimenting with several models locally and the variability is intriguing. In some cases, efficacy seems to be inversely proportional to the size of the model. Ironically, for coding tasks, Gemma seems to provide more meta info than Llama :-)
