To install models manually from HuggingFace, follow the steps below.
As an example, to download Pygmalion-6b, paste the following repository link into GitHub Desktop:
https://huggingface.co/PygmalionAI/pygmalion-6b
For the local path, set it to one of the following folders, depending on your backend:
[KoboldAI Folder]/models
[Oobabooga Folder]/text-generation-webui/models
Look for a file with `4bit-` in the file name. If no such file exists, rename the `.safetensors` file you see to `4bit`. If you see an `Xg` marker in the filename, incorporate it into the new name (i.e. if the model has `128g` in the filename, make the new filename `4bit-128g` instead).
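The renaming step above can be sketched in Python. This is a minimal sketch, assuming the model folder contains exactly one `.safetensors` file; the function name is hypothetical.

```python
import re
from pathlib import Path

def rename_for_4bit(model_dir: str) -> str:
    """Rename the model's .safetensors file so its name starts with
    '4bit', keeping any Xg group-size marker (e.g. '128g') if present.
    Assumes a single .safetensors file in the folder."""
    folder = Path(model_dir)
    src = next(folder.glob("*.safetensors"))
    group = re.search(r"\d+g", src.name)  # e.g. matches '128g'
    new_name = f"4bit-{group.group()}.safetensors" if group else "4bit.safetensors"
    src.rename(folder / new_name)
    return new_name
```

For example, a file named `model-128g.safetensors` would become `4bit-128g.safetensors`, while one with no group-size marker would simply become `4bit.safetensors`.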
You will need `git` installed on your system. In a terminal, navigate to one of the following folders, depending on your backend:
[KoboldAI Folder]/models
[Oobabooga Folder]/text-generation-webui/models
git clone <repo link>
Replace `<repo link>` with the HuggingFace repository link. As an example, to download Pygmalion-6b, the command would be:
git clone https://huggingface.co/PygmalionAI/pygmalion-6b
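As a sketch of what the clone does: `git` creates a subfolder named after the last path segment of the link, so the example above leaves a `pygmalion-6b` folder inside your models directory. The helper functions below are hypothetical, for illustration only.

```python
import subprocess
from urllib.parse import urlparse

def clone_dest(repo_link: str) -> str:
    """Folder name `git clone` will create: the last path
    segment of the repository link."""
    return urlparse(repo_link).path.rstrip("/").split("/")[-1]

def clone_into_models(repo_link: str, models_dir: str) -> None:
    """Run `git clone` inside the backend's models folder."""
    subprocess.run(["git", "clone", repo_link], cwd=models_dir, check=True)
```

Note that HuggingFace model repositories store their large weight files with Git LFS, so you may need `git lfs install` before cloning for the weights to download fully.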
GGML models are trickier than base and GPTQ models. To download them, follow these steps.
For this tutorial, we will be using this GGML repository by concedo.
Some repositories may have multiple versions of `.bin` files to download. In this case, download the version that has `ggml` in its name. You may also notice different `qX_X` and `f16` variants. We recommend downloading a model between `q4_0` and `q5_1` for GGML models.
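The file-selection rule above can be sketched as a filter. This is a sketch under one assumption: "between `q4_0` and `q5_1`" is interpreted as the set `{q4_0, q4_1, q5_0, q5_1}`, and the filenames in the example are made up.

```python
import re

# Interpreting "between q4_0 and q5_1" as this set (an assumption).
RECOMMENDED = {"q4_0", "q4_1", "q5_0", "q5_1"}

def pick_ggml_file(filenames):
    """Return the first .bin file with 'ggml' in its name and a
    recommended qX_X quantization, or None if nothing qualifies."""
    for name in filenames:
        if not name.endswith(".bin") or "ggml" not in name:
            continue
        quant = re.search(r"q\d_\d", name)
        if quant and quant.group() in RECOMMENDED:
            return name
    return None
```

Given hypothetical files `pygmalion-6b-f16.bin`, `pygmalion-6b-ggml-q8_0.bin`, and `pygmalion-6b-ggml-q4_0.bin`, the filter would select the `q4_0` one.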
Move the `.bin` file you downloaded to the folder where KoboldCPP is saved, or to [Oobabooga Folder]/text-generation-webui/models for Oobabooga.

You will need a secondary device to get the link to the model, and you will need `wget` installed on your system if it isn't already. Make sure your terminal instance is in the folder where KoboldCPP is saved, or in [Oobabooga Folder]/text-generation-webui/models for Oobabooga.
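Before running the download, you can confirm that `wget` is actually installed; a minimal Python sketch using the standard library:

```python
import shutil

def have_wget() -> bool:
    """True if a `wget` executable is found on the PATH."""
    return shutil.which("wget") is not None
```

If this returns False, install `wget` through your system's package manager first.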
On your secondary device, copy the model's download link (Copy Link). Then, in the terminal, run:
wget <link>
Replace `<link>` with the link you copied over.
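By default, `wget` saves the file under the last path segment of the link, so you can predict the resulting filename before downloading. The helper below is a simplified, hypothetical sketch; real `wget` naming has more cases (query strings, `Content-Disposition` headers), and the example URL is made up.

```python
from urllib.parse import urlparse

def wget_filename(link: str) -> str:
    """Simplified guess at the filename wget writes: the last path
    segment of the URL, or 'index.html' when the path is empty."""
    path = urlparse(link).path
    return path.rstrip("/").split("/")[-1] or "index.html"
```

For a hypothetical link like `https://example.com/models/pygmalion-6b-ggml-q4_0.bin`, the downloaded file would land in the current folder as `pygmalion-6b-ggml-q4_0.bin`.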