Tired of chatbots that forget what you just said? Open-source LLMs (large language models) have been steadily pushing their context lengths, letting a model keep far more of a document or conversation in view at once. This article looks at open-source models with some of the longest context windows currently available and what each one offers.
Local LLMs With the Longest Context Lengths
internlm2_5-7b-chat-1m
The InternLM team has released a new 7B-parameter model, internlm2_5-7b-chat-1m. They claim it has some interesting capabilities, though as always, we should take these with a grain of salt until independently verified.
Apparently, it’s pretty good at math reasoning, supposedly outperforming some other well-known models. They’ve also managed to bump up the context window to a massive 1 million tokens.
They’re touting its ability to use tools more effectively, mentioning something about gathering info from over 100 web pages. Sounds potentially useful, but we’ll have to see how it actually performs in practice.
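If you want to poke at it locally, a minimal sketch with Hugging Face transformers looks roughly like this (assuming the weights from the internlm/internlm2_5-7b-chat-1m repository on Hugging Face and enough GPU memory; the flags shown are standard transformers options, nothing InternLM-specific):

```python
# Minimal sketch: load internlm2_5-7b-chat-1m with Hugging Face transformers.
# Note: actually filling a 1M-token prompt needs far more memory than a single consumer GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2_5-7b-chat-1m"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # spread layers across available GPUs
    trust_remote_code=True,  # InternLM ships custom modeling code
)

prompt = "Explain in two sentences what a 1M-token context window is useful for."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```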
Llama-3-8B-Instruct-Gradient-4194k
The Llama-3-8B-Instruct-Gradient-4194k is an impressive upgrade of the Llama-3 8B model. It extends the context length from 8K to roughly 4.19 million (4,194,304) tokens. Created by Gradient with compute sponsored by Crusoe Energy, this model shows that a strong base model can be adapted to much longer contexts with a comparatively small amount of extra training.
This model suits tasks that need deep understanding and generation of long texts, such as analyzing legal documents, summarizing research papers, or building chatbots that track long conversations.
- Llama-3-8B-Instruct-Gradient-4194k: Up to 4 million tokens
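Before reaching for a multi-million-token window, it's worth estimating what the key/value (KV) cache alone would cost at inference time. A back-of-the-envelope calculation, assuming Llama-3 8B's published architecture (32 layers, 8 KV heads via grouped-query attention, head dimension 128) and a 16-bit cache:

```python
# Rough KV-cache size for Llama-3 8B at long context lengths.
# Architecture figures are the published Llama-3 8B config; adjust if yours differs.
layers, kv_heads, head_dim, bytes_per_value = 32, 8, 128, 2  # 2 bytes = fp16/bf16

bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # x2 for K and V
print(bytes_per_token)  # 131072 bytes, i.e. 128 KiB per cached token

for ctx in (8_192, 1_048_576, 4_194_304):
    print(f"{ctx:>9} tokens -> ~{ctx * bytes_per_token / 2**30:,.0f} GiB of KV cache")
# ~1 GiB at 8K, ~128 GiB at 1M, ~512 GiB at 4M, before counting weights and activations
```

In other words, the headline number is only reachable in practice with cache quantization, offloading, or a multi-GPU setup.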
Llama-3 8B Gradient Instruct 1048k
The Llama-3 8B Gradient Instruct 1048k model is a powerful language model that builds upon the capabilities of the original Llama-3 8B. This enhanced model, developed by Gradient and sponsored by Crusoe Energy, has a context length of over 1048K tokens, a significant upgrade from the base model’s 8K limit. This extended context window lets the model consider a far greater amount of preceding text, resulting in more coherent and contextually appropriate outputs.
What sets this model apart is its efficient training process. Despite the substantial increase in context length, it required only 830M additional training tokens for this stage, bringing the total long-context training budget to 1.4B tokens, a tiny fraction (<0.01%) of Llama-3’s original pre-training data. This shows the model can adapt to longer contexts with minimal additional training, making it a highly efficient and cost-effective option.
With its SOTA performance and extended context length, the Llama-3 8B Gradient Instruct 1048k model is an attractive option for developers and researchers seeking to work with large amounts of text data.
- Llama-3 8B Gradient Instruct 1048k: Up to 1 million tokens
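A quick sanity check before relying on such a window is to count your document's tokens with the model's own tokenizer. A small sketch (the repository ID and file path here are illustrative):

```python
# Check whether a long document fits within a model's advertised context window.
from transformers import AutoTokenizer

MODEL_ID = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed Hugging Face repo ID
MAX_CONTEXT = 1_048_576                                      # advertised window, in tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
with open("contract.txt", encoding="utf-8") as f:            # placeholder document
    text = f.read()

n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens; fits in the window: {n_tokens <= MAX_CONTEXT}")
```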
Phi-3-Mini-128K-Instruct
The Phi-3-Mini-128K-Instruct model packs a capable assistant into a lightweight package. With 3.8 billion parameters, it punches above its weight, reportedly rivaling models with far more parameters.
Trained on the diverse and carefully curated Phi-3 datasets, including synthetic data and filtered web content, the Phi-3-Mini-128K-Instruct excels in common sense, language understanding, mathematics, coding, and logical reasoning. It boasts a context length of 128K tokens, allowing it to maintain long-term dependencies and understand nuanced contexts.
In benchmarks, it holds up against models with larger parameter counts, and its smaller size means faster inference and lower memory requirements, making it an efficient and cost-effective choice for a variety of applications.
- Phi-3-Mini-128K-Instruct: Up to 128K tokens
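Because it is only 3.8B parameters, Phi-3-Mini-128K-Instruct is easy to try on a single GPU using the standard transformers chat-template flow. A minimal sketch (model ID as published by Microsoft on Hugging Face; trust_remote_code may be unnecessary on newer transformers versions):

```python
# Minimal chat sketch for Phi-3-Mini-128K-Instruct using the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [
    {"role": "user", "content": "What does a 128K-token context window let a model do?"},
]
# apply_chat_template renders the messages into the prompt format the model expects.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```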
LWM-Text-1M-Chat
The LWM-Text-1M-Chat model is part of a fully open-source family of 7B-parameter models (the Large World Model series) capable of processing long text documents and videos of over 1M tokens. The family is designed to understand both human textual knowledge and the physical world, enabling broader AI capabilities for assisting humans.
The model uses the RingAttention technique to train scalably on long sequences, with the context size increased progressively from 4K to 1M tokens during training. It also uses masked sequence packing to mix different sequence lengths, loss weighting to balance language and vision, and a model-generated QA dataset for long-sequence chat. A toy sketch of the block-rotation idea behind RingAttention follows below.
- LWM-Text-1M-Chat: Up to 1 million tokens
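For intuition, here is a toy, single-process numpy sketch of that block-rotation idea: each simulated device keeps its own query block, key/value blocks are passed around a ring, and each incoming block is folded into a streaming (online) softmax. It ignores causal masking, multi-head layout, and real device-to-device communication, so treat it as an illustration of the technique rather than the LWM implementation.

```python
import numpy as np

def ring_attention_toy(q_blocks, k_blocks, v_blocks):
    """Toy ring attention: KV blocks rotate around a 'ring' of devices while each
    device accumulates attention over its own query block with an online softmax."""
    n_dev = len(q_blocks)
    d = q_blocks[0].shape[-1]
    out = [np.zeros_like(q) for q in q_blocks]                  # running weighted sums of V
    run_max = [np.full(q.shape[0], -np.inf) for q in q_blocks]  # running max logit per query
    denom = [np.zeros(q.shape[0]) for q in q_blocks]            # running softmax denominator

    kv = list(zip(k_blocks, v_blocks))
    for _ in range(n_dev):        # after n_dev rotations, every device has seen every KV block
        for i in range(n_dev):
            k, v = kv[i]
            scores = q_blocks[i] @ k.T / np.sqrt(d)
            new_max = np.maximum(run_max[i], scores.max(axis=-1))
            rescale = np.exp(run_max[i] - new_max)              # rescale earlier partial results
            probs = np.exp(scores - new_max[:, None])
            out[i] = out[i] * rescale[:, None] + probs @ v
            denom[i] = denom[i] * rescale + probs.sum(axis=-1)
            run_max[i] = new_max
        kv = kv[1:] + kv[:1]                                    # hand KV blocks to the next device
    return [o / dn[:, None] for o, dn in zip(out, denom)]

# Sanity check: the result matches ordinary attention over the concatenated sequence.
rng = np.random.default_rng(0)
q = [rng.normal(size=(4, 16)) for _ in range(3)]
k = [rng.normal(size=(4, 16)) for _ in range(3)]
v = [rng.normal(size=(4, 16)) for _ in range(3)]
ring_out = np.concatenate(ring_attention_toy(q, k, v))

K, V = np.concatenate(k), np.concatenate(v)
scores = np.concatenate(q) @ K.T / np.sqrt(16)
weights = np.exp(scores - scores.max(-1, keepdims=True))
full_out = weights @ V / weights.sum(-1, keepdims=True)
print(np.allclose(ring_out, full_out))  # True
```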
dolphin-2.6-mistral-7b-dpo-laser
Dolphin 2.6 Mistral 7B – DPO Laser is an open-source language model developed by Cognitive Computations. It is based on Mistral-7B and has a context length of 16K. It’s a special release of Dolphin-DPO based on the LASER paper and implementation: selected weight matrices are replaced with low-rank approximations obtained via singular value decomposition (SVD), which acts as a form of noise reduction (sketched at the end of this section). The model has scored higher than its predecessors, Dolphin 2.6 and Dolphin 2.6-DPO, and in theory should produce more robust outputs.
The model is uncensored and the dataset has been filtered to remove alignment and bias, making the model more compliant. However, it’s advised to implement your own alignment layer before exposing the model as a service.
- dolphin-2.6-mistral-7b-dpo-laser: 16,000 tokens
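As promised above, here is a minimal numpy sketch of the rank-reduction operation at the heart of LASER. The actual Dolphin pipeline's choice of layers and ranks is not reproduced here; this only shows the core SVD truncation.

```python
import numpy as np

def laser_style_reduce(weight: np.ndarray, rank: int) -> np.ndarray:
    """Return the best rank-`rank` approximation of a weight matrix via SVD.
    LASER-style 'noise reduction' applies this to selected layers of a model,
    dropping the small singular values that often carry noise."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Example: reduce a random 512x512 "weight matrix" to rank 64.
w = np.random.default_rng(0).normal(size=(512, 512))
w_low = laser_style_reduce(w, rank=64)
print(np.linalg.matrix_rank(w_low))  # 64
```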
OpenChat-3.5-0106-128k
OpenChat-3.5-0106-128k is an open-source language model developed by the OpenChat team and extended by CallComply. It is part of the OpenChat series and has an impressive context length of 128K. According to the OpenChat team’s benchmarks, it outperforms ChatGPT (March) and Grok-1, with a 15-point improvement in coding over OpenChat-3.5.
The release also introduces two modes, Coding + Generalist and Mathematical Reasoning, along with experimental support for evaluator and feedback capabilities.
- OpenChat-3.5-0106-128k: 128,000 tokens
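Finally, whichever model you pick, the context you actually get when running locally is whatever you ask the runtime for, memory permitting. For example, with llama-cpp-python (one common way to run GGUF conversions of these models locally) the window is set explicitly; the file name below is a placeholder for whatever quantized copy you have:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="openchat-3.5-0106-128k.Q4_K_M.gguf",  # placeholder GGUF file name
    n_ctx=32768,  # request only as much context as your RAM/VRAM can actually hold
)
result = llm("Summarize the following meeting transcript: ...", max_tokens=256)
print(result["choices"][0]["text"])
```

The advertised context length is the ceiling; the window you configure, and the memory it consumes, determines how much the model will actually remember in practice.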