Novita AI
@novitaai

AI Cloud for Everyone, Everywhere

Gemma-4-26B-A4B
Gemma 4 26B A4B is built for developers who need scalable performance without sacrificing core capabilities. Crucially, it retains the massive 256K-token context window of the 31B model, making it highly competitive for long-context RAG and for processing extensive, image-rich document datasets. It fully supports the series' core innovations: native Thinking mode for advanced logic, Interleaved Multimodal Input for dynamic text-image workflows, and reliable document/UI parsing. Equipped with native Function Calling and robust coding proficiency, the 26B A4B is a cost-effective engine for powering real-world agentic workflows, visual automation, and global applications across its 140+ pre-trained languages (a query sketch follows this entry).
File Support: Text, Markdown, Image, and PDF files
Context window: 262k tokens
OFFICIAL · NEW
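All of the bots on this page are served through Poe, so they can also be queried programmatically with Poe's official `fastapi_poe` client. Below is a minimal sketch, assuming a valid Poe API key; the prompt and key are placeholders you would supply yourself.

```python
import asyncio

import fastapi_poe as fp

# Minimal sketch: stream a response from a Poe bot via fastapi_poe.
async def main() -> None:
    message = fp.ProtocolMessage(
        role="user",
        content="Summarize the attached report in five bullet points.",
    )
    async for partial in fp.get_bot_response(
        messages=[message],
        bot_name="Gemma-4-26B-A4B",   # bot name as it appears in this listing
        api_key="YOUR_POE_API_KEY",   # placeholder: get one at poe.com
    ):
        print(partial.text, end="", flush=True)

asyncio.run(main())
```

The same pattern works for every bot listed below; only `bot_name` changes.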
Gemma-4-31B-N
Gemma 4 31B is engineered to tackle the most demanding enterprise workloads and complex reasoning tasks. With an expansive 256K-token context window, the 31B model can ingest entire codebases and massive sets of images in a single prompt. It offers state-of-the-art vision-language capabilities, letting developers freely interleave text and images, and it excels at parsing UI screens, comprehending complex charts, and performing multilingual OCR and handwriting recognition. Combined with native structured Function Calling, robust code generation, and out-of-the-box fluency in 35+ languages, Gemma 4 31B is a strong foundation for building sophisticated, autonomous AI agents and heavy-duty multimodal analysis pipelines (a function-calling sketch follows this entry).
File Support: Text, Markdown, Image, and PDF files
Context window: 262k tokens
OFFICIAL · NEW
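For the structured Function Calling mentioned above, here is a sketch in the common OpenAI-compatible request format. Note the assumptions: the base URL reflects Novita AI's OpenAI-compatible API, but the model ID, API key, and `get_order_status` tool are placeholders invented for illustration.

```python
from openai import OpenAI

# Sketch of a structured function-calling request (OpenAI-compatible format).
client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed endpoint
    api_key="YOUR_NOVITA_API_KEY",               # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical tool, for illustration only
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="google/gemma-4-31b-n",  # placeholder model ID
    messages=[{"role": "user", "content": "Where is order 81725?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured arguments appear here
# instead of free text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

The same request shape works for any tool schema; the model returns structured JSON arguments when it decides a call is warranted.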
MiniMax-Speech-2.8
MiniMax Speech 2.8 is a premium text-to-speech model delivering studio-quality audio with enhanced clarity and naturalness. With support for multiple voice presets, emotional tones, and fine-grained audio controls, it produces broadcast-ready speech synthesis for professional applications.
OFFICIAL
MiMo-V2-Flash
MiMo-V2-Flash is a Mixture-of-Experts (MoE) model developed in-house by Xiaomi, designed for extreme inference efficiency with 309B total parameters (15B active). Incorporating an innovative hybrid attention architecture and multi-layer MTP inference acceleration, it ranks among the top two open-source models globally across multiple agent benchmarks. Its coding capabilities surpass all open-source models and rival the industry-leading closed-source Claude 4.5 Sonnet, at only 2.5% of the inference cost and with twice the generation speed, pushing the limits of both model performance and efficiency.
Context window: 262k tokens
This bot supports optional parameters for additional customization.
OFFICIAL
Minimax-M2.7
MiniMax M2.7 is a versatile, all-around evolved open-source large language model that blends hardcore engineering productivity with high-EQ, human-like interaction. In real-world software engineering, M2.7 independently drives end-to-end project delivery while efficiently handling advanced tasks such as log analysis, bug troubleshooting, code security, and machine learning. In the professional workspace, it posts the highest open-source GDPval-AA score (1495 ELO) and delivers high-fidelity, complex editing and multi-turn revisions across the Office suite (Excel, PPT, Word), elevating task execution to industry-leading standards. Built for complex environment interactions, M2.7 maintains a 97% skill-following rate even with complex, long-context tool calls (>2000 tokens). Beyond raw productivity, M2.7 breaks the "cold tool" stereotype of traditional models: with strong identity retention and high emotional intelligence, it empowers enterprise productivity while opening room for product innovation.
Context window: 205k tokens
This bot supports optional parameters for additional customization.
OFFICIAL
Qwen3.5-397B-A17B
Qwen3.5-397B-A17B is the Qwen3.5 series' native vision-language model, built on a hybrid architecture that integrates linear attention with a sparse Mixture-of-Experts (MoE) design for higher inference efficiency. Across a wide variety of tasks, including language understanding, logical reasoning, code generation, agentic tasks, image and video understanding, and graphical user interface (GUI) interaction, it performs on par with current top-tier frontier models. With robust code generation and agentic capabilities, it generalizes well across agent scenarios.
File Support: Text, Markdown, Image, Video, and PDF files
Context window: 262k tokens
Optional parameters (a usage sketch follows this entry):
- Thinking: enable thinking about the response before giving a final answer; toggle it `on` (it is `off` by default).
- Temperature: controls randomness in the response; set a number from 1 to 2 (default `0.7`). Lower values make the output more focused and deterministic.
- Max output tokens: set a number from 1 to 64000 (default 64000).
OFFICIAL
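Here is how those optional parameters might look in an OpenAI-compatible request. This is a sketch only: the endpoint and model ID are placeholders, and the exact field name for the thinking toggle (`enable_thinking` below) is an assumption; check the provider's documentation for the real name.

```python
from openai import OpenAI

# Sketch of the listing's optional parameters in an OpenAI-compatible call.
client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed endpoint
    api_key="YOUR_API_KEY",                      # placeholder
)

response = client.chat.completions.create(
    model="qwen/qwen3.5-397b-a17b",  # placeholder model ID
    messages=[{"role": "user",
               "content": "Plan a three-step refactor of a legacy ETL job."}],
    temperature=0.7,                 # listing default; lower = more deterministic
    max_tokens=64000,                # listing's stated maximum and default
    extra_body={"enable_thinking": True},  # assumed field name for the thinking toggle
)
print(response.choices[0].message.content)
```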
Minimax-M2.5
MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex, real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work: fluently generating and operating Word, Excel, and PowerPoint files, context-switching between diverse software environments, and working across different agent and human teams. Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.
Context window: 205k tokens
This bot supports optional parameters for additional customization.
OFFICIAL
GLM-5
GLM-5 is an open-source foundation model engineered for complex system engineering and long-horizon agent tasks, delivering reliable productivity for top-tier programmers. Crossing the boundary from "writing code" to "building systems," it moves beyond traditional snippet generation to offer senior-architect-level planning and execution. Rejecting the "frontend-heavy, logic-light" approach, GLM-5 demonstrates exceptional reasoning and self-healing in backend refactoring, complex algorithm implementation, and deep debugging: it autonomously analyzes logs and iteratively fixes persistent bugs until the system runs. As the first open-source model with Opus-class style and system-engineering depth, GLM-5 pairs extreme logic density with the freedom of local deployment and high cost-effectiveness, making it an ideal choice for large-scale backend development and automated agent construction.
Context window: 205k tokens
This bot supports optional parameters for additional customization.
OFFICIAL
Qwen3-Coder-Next
Qwen3-Coder-Next is an open-weight language model engineered specifically for coding agents and local development environments. It delivers exceptional performance with only 3B activated parameters out of 80B total, achieving results comparable to models with 10-20x more active parameters while remaining cost-effective for agent deployment. Its training methodology targets advanced agentic capabilities, including long-horizon reasoning, complex tool usage, and robust recovery from execution failures, for reliable performance across dynamic coding tasks. A 256k context length and adaptability to various scaffold templates enable integration with diverse CLI/IDE platforms such as Claude Code, Qwen Code, Qoder, Kilo, Trae, and Cline (a minimal agent-loop sketch follows this entry).
Optional parameters:
- Temperature: controls randomness in the response; set a number from 1 to 2 (default 0.7). Lower values make the output more focused and deterministic.
- Max output tokens: set a number from 1 to 65536 (default 65536).
OFFICIAL
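To illustrate the agentic pattern this model targets, here is a minimal sketch of the loop a coding scaffold runs: the model proposes a shell command, the harness executes it, and the output (including failures) is fed back so the model can recover and retry. The base URL and model ID are placeholders, and the single `run_shell` tool is a hypothetical example, not part of any real scaffold.

```python
import json
import subprocess

from openai import OpenAI

client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed endpoint
    api_key="YOUR_API_KEY",                      # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",  # hypothetical tool, for illustration only
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

messages = [{"role": "user", "content": "Run the test suite and fix any failures."}]
for _ in range(5):  # bound the loop so a stuck agent cannot spin forever
    reply = client.chat.completions.create(
        model="qwen/qwen3-coder-next",  # placeholder model ID
        messages=messages,
        tools=tools,
    )
    msg = reply.choices[0].message
    if not msg.tool_calls:  # no more tool use: the model is done
        print(msg.content)
        break
    messages.append(msg)  # keep the assistant turn in the transcript
    for call in msg.tool_calls:
        cmd = json.loads(call.function.arguments)["command"]
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        # Return stdout *and* stderr so the model can see, and recover from, errors.
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": result.stdout + result.stderr})
```

Feeding failures back verbatim, rather than aborting on a nonzero exit code, is what lets a model trained for execution-failure recovery actually exercise that capability.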
Kimi-K2.5
Kimi K2.5 is the latest flagship iteration of Moonshot AI's large language model series, representing a significant leap in multimodal and agentic capabilities. It features a native multimodal architecture supporting both visual and text inputs, alongside versatile thinking and non-thinking modes. It keeps the substantial 256k-token context window of the K2 series while setting new open-source state-of-the-art (SoTA) results across general intelligence, coding, and visual understanding benchmarks. Kimi K2.5 is a breakthrough for frontend development, generating fully functional, aesthetically polished interactive interfaces with complex dynamic layouts directly from natural language. Optimized for complex problem-solving, it excels at multi-step tool invocation, logical reasoning, and full-stack code synthesis (a multimodal request sketch follows this entry).
Optional parameters:
- Thinking: enable thinking about the response before giving a final answer; toggle it `on` (it is `off` by default).
- Temperature: controls randomness in the response; set a number from 1 to 2 (default 0.7). Lower values make the output more focused and deterministic.
- Max output tokens: set a number from 1 to 262144 (default 262144).
OFFICIAL
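For the visual input described above, here is a sketch of a mixed image-and-text request using the widely supported OpenAI-style content-parts format. The endpoint, model ID, and image URL are placeholders, not confirmed values for this bot.

```python
from openai import OpenAI

# Sketch: interleaved image + text input via OpenAI-style content parts.
client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed endpoint
    api_key="YOUR_API_KEY",                      # placeholder
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",  # placeholder model ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which UI framework does this screenshot most resemble?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/screenshot.png"}},  # placeholder
        ],
    }],
    temperature=0.7,  # listing default
)
print(response.choices[0].message.content)
```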