probabl-ai/ScikitLLM-Model-exl2 · Hugging Face


ScikitLLM is an LLM fine-tuned to write references and code for the Scikit-Learn documentation.

Features of ScikitLLM include:

  • Support for retrieval-augmented generation (RAG) with three context chunks
  • Sources and quotations using a modified version of the wiki syntax ("")
  • Code samples and examples based on the code quoted in the chunks
  • Expanded knowledge of Scikit-Learn concepts and documentation
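The card does not document the exact prompt template, so the following is only a hypothetical sketch of how three retrieved documentation chunks might be assembled for the model; the `build_rag_prompt` helper, the `Chunk N:` labels, and the use of the `""` quote marks around each chunk are assumptions, not the published format.

```python
# Hypothetical sketch: assembling three retrieved chunks into one prompt.
# The ""..."" wrapping mirrors the modified wiki quotation syntax the
# model card mentions, but the real template is not documented here.

def build_rag_prompt(question, chunks):
    """Join up to three documentation chunks with the user question."""
    if len(chunks) > 3:
        raise ValueError("ScikitLLM is described as using three chunks")
    context = "\n\n".join(
        f'Chunk {i + 1}:\n""{chunk}""' for i, chunk in enumerate(chunks)
    )
    return f"{context}\n\nQuestion: {question}"

prompt = build_rag_prompt(
    "How do I fit a LinearRegression?",
    [
        "sklearn.linear_model.LinearRegression fits a linear model...",
        "The fit method accepts X of shape (n_samples, n_features)...",
        "Use predict(X) to obtain estimated targets...",
    ],
)
```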

Training

ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tune of Mistral 7B. OpenHermes already includes many capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.
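For readers unfamiliar with ChatML, the turn format OpenHermes-style models use can be sketched as below; the system prompt text is hypothetical, not ScikitLLM's actual one.

```python
# Minimal sketch of the ChatML turn format: each turn is wrapped in
# <|im_start|>role ... <|im_end|> tags, and generation is cued by
# opening an assistant turn.

def chatml_turn(role, content):
    return f"<|im_start|>{role}\n{content}<|im_end|>\n"

prompt = (
    chatml_turn("system", "You answer questions about Scikit-Learn.")
    + chatml_turn("user", "What does StandardScaler do?")
    + "<|im_start|>assistant\n"  # model continues from here
)
```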

As a fine-tune of a fine-tune, ScikitLLM was trained with a lower learning rate than is commonly used in fine-tuning projects, to limit the risk of overwriting the capabilities inherited from OpenHermes.
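The card does not publish the actual hyperparameters, so the numbers below are illustrative only: a second-stage fine-tune commonly scales the learning rate down from a typical first-stage value.

```python
# Illustrative values only: ScikitLLM's real hyperparameters are not
# published in this card. The point is the relation between stages,
# not the specific numbers.

BASE_FINETUNE_LR = 2e-5      # a typical first-stage fine-tuning rate
SECOND_STAGE_SCALE = 0.25    # hypothetical reduction factor

second_stage_lr = BASE_FINETUNE_LR * SECOND_STAGE_SCALE
print(f"second-stage learning rate: {second_stage_lr:.0e}")
```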
