Chinese-Vicuna/llama7b_8bit_128g · Hugging Face

8-bit quantization and 128 groupsize for LLaMA 7B

This is a Chinese instruction-tuning LoRA checkpoint based on llama-13B from this repo's work. It consumes approximately 8.5 GB of GPU memory.
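The "8-bit quantization and 128 groupsize" scheme named above can be illustrated with a minimal sketch. This is not this checkpoint's actual quantizer; the function names and pure-Python layout are illustrative. The idea is that each group of 128 consecutive weights shares one floating-point scale, and each weight is stored as a signed 8-bit integer:

```python
import random

def quantize_groupwise(weights, group_size=128, bits=8):
    """Quantize a flat list of floats to signed ints, one scale per group."""
    qmax = 2 ** (bits - 1) - 1  # 127 for 8-bit signed
    q, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        amax = max(abs(w) for w in group)
        scale = amax / qmax if amax > 0 else 1.0  # avoid division by zero
        scales.append(scale)
        # round each weight to the nearest representable integer level
        q.extend(max(-qmax - 1, min(qmax, round(w / scale))) for w in group)
    return q, scales

def dequantize_groupwise(q, scales, group_size=128):
    """Reconstruct approximate floats by rescaling each group."""
    return [q[i] * scales[i // group_size] for i in range(len(q))]

if __name__ == "__main__":
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(300)]
    q, s = quantize_groupwise(w)
    w2 = dequantize_groupwise(q, s)
    print(max(abs(a - b) for a, b in zip(w, w2)))  # small reconstruction error
```

A smaller group size means more scales to store but a tighter fit to the local weight range, which is the trade-off behind the 128 groupsize in this checkpoint's name.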

"input": the mean of life is
"output": the mean of life is 70 years. the median age at death in a population, regardless if it's male or female?