OpenRouter
© 2026 OpenRouter, Inc


Liquid

Browse models from Liquid

7 models

Tokens processed on OpenRouter

  • LiquidAI: LFM2.5-1.2B-Thinking (free)
    9.55M tokens

    LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG—while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is designed to provide higher-quality “thinking” responses in a small 1.2B model.

    by liquid · 33K context · $0/M input tokens · $0/M output tokens
  • LiquidAI: LFM2.5-1.2B-Instruct (free)
    4.76M tokens

    LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.

    by liquid · 33K context · $0/M input tokens · $0/M output tokens
  • LiquidAI: LFM2-8B-A1B
    2.07M tokens

    LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.

    by liquid · 33K context · $0.01/M input tokens · $0.02/M output tokens
  • LiquidAI: LFM2-2.6B
    1.89M tokens

    LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

    by liquid · 33K context · $0.01/M input tokens · $0.02/M output tokens
  • Liquid: LFM 7B

    LFM-7B is a best-in-class language model designed for exceptional chat capabilities, including in languages such as Arabic and Japanese. Powered by the Liquid Foundation Model (LFM) architecture, it offers a low memory footprint and fast inference. Liquid describes it as the world's best-in-class multilingual language model in English, Arabic, and Japanese. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
  • Liquid: LFM 3B

    Liquid's LFM 3B delivers incredible performance for its size. It places first among 3B-parameter transformers, hybrids, and RNN models, and is on par with Phi-3.5-mini on multiple benchmarks while being 18.4% smaller. LFM-3B is the ideal choice for mobile and other edge text-based applications. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
  • Liquid: LFM 40B MoE

    Liquid's 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems. LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
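
The models above are reachable through OpenRouter's OpenAI-compatible chat completions endpoint and billed at the listed per-million-token rates. A minimal sketch in Python of building such a request and estimating its cost — the model slug `liquid/lfm2-8b-a1b` and the token counts are illustrative assumptions; check each model's page for its exact ID and current pricing:

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request_body(model: str, prompt: str) -> str:
    """Build the JSON body for an OpenAI-compatible chat completion request."""
    return json.dumps({
        "model": model,  # assumed slug for illustration; verify on the model page
        "messages": [{"role": "user", "content": prompt}],
    })

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost = tokens / 1e6 * listed per-million price, summed over input and output."""
    return ((input_tokens / 1_000_000) * price_in_per_m
            + (output_tokens / 1_000_000) * price_out_per_m)

# A full 33K-token prompt plus a 1K-token reply on LFM2-8B-A1B,
# at the listed $0.01/M input and $0.02/M output rates:
body = build_request_body("liquid/lfm2-8b-a1b", "Summarize this document.")
cost = estimate_cost_usd(33_000, 1_000, 0.01, 0.02)  # ≈ $0.00035
```

The body would be POSTed to `OPENROUTER_URL` with an `Authorization: Bearer <API key>` header; the cost helper simply shows why even a maximal-context request on these small edge models stays well under a cent.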