What is the current (Apr. 2024) gold standard for running an LLM locally?

Asked by admin on Aug 24, 2024, 03:55 PM
As of April 2024, the gold-standard setup for running an LLM locally on a 24 GB RTX 3090 is Oobabooga's text-generation-webui as the model backend, paired with SillyTavern as the chat frontend. Both tools are well regarded for their ease of use and efficiency. For the latest models and advice, check the "best current local model" threads on the LocalLLaMA subreddit.
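
Under the hood, SillyTavern is just a frontend: it talks to the Oobabooga backend over a local HTTP API. As a minimal sketch, assuming text-generation-webui was started with the --api flag and exposes its OpenAI-compatible endpoint on the default port 5000 (both assumptions; adjust to your install), you can query the same backend directly from Python with nothing but the standard library:

    # Minimal sketch: query a local text-generation-webui backend through
    # its OpenAI-compatible chat endpoint. The URL and port are assumptions
    # based on the default --api configuration; adjust to your setup.
    import json
    import urllib.request

    API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed default

    payload = {
        "messages": [
            {"role": "user", "content": "What can a 24 GB GPU run locally?"}
        ],
        "max_tokens": 200,
        "temperature": 0.7,
    }

    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # The response follows the OpenAI chat-completion shape.
    print(result["choices"][0]["message"]["content"])

SillyTavern connects the same way: point its API settings at that local address and it handles the prompting, character cards, and chat history for you.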
