LM Studio is one of the easiest ways to run LLMs locally. It supports Windows, macOS, and Linux.

To run NeoGPT with LM Studio:

  1. Download and install LM Studio from the official website.

  2. Select the model you want to download and run, for example Mistral-7B.

  3. Go to the Local Inference Server tab and click Start Server.

  4. In the terminal, run the following command to use LM Studio:

python main.py --model lmstudio/<model-name>

You can run any model from LM Studio with NeoGPT. For subsequent runs, start the server first, then run the command above.
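Under the hood, LM Studio's Local Inference Server exposes an OpenAI-compatible HTTP API, which is what a backend like NeoGPT talks to. As a rough sketch of that interaction, here is a minimal standalone Python client. The base URL assumes LM Studio's default port 1234 (shown in the Local Inference Server tab); the `model` value and helper names are illustrative, not part of NeoGPT.

```python
import json
from urllib import request

# Assumption: LM Studio serves an OpenAI-compatible API at this default
# address; change the port if you configured a different one.
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    """Build the JSON body for a /chat/completions call.

    `model` is a placeholder; LM Studio routes requests to whichever
    model is currently loaded in the server.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt: str) -> str:
    """Send a prompt to the running server (start it in LM Studio first)."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires the Local Inference Server to be running.
    print(ask("Hello!"))
```

If the request fails with a connection error, the server is not running; go back to the Local Inference Server tab and click Start Server.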