14 Best Uncensored LLM Models (7B to 20B)


As an avid explorer of LLM AI, I've discovered that uncensored language models open up a whole new world of opportunities for creativity and expression. Through my own experiments, I've found they offer real advantages over their censored counterparts: they handle complex queries, enliven roleplay scenarios, and show impressive instruction following and debating skills, all areas where censored models often struggle.

In this article, I share my firsthand experiences with the top uncensored LLMs sporting between 7 and 20 billion parameters. Join me as we delve deep into the unique characteristics, capabilities, and practical applications of each model. Together, we’ll unlock the true potential of uncensored conversational AI.

Related: 3 Open Source LLM With Longest Context Length

Contact me if you think some other model should be on the list.

Best Uncensored LLM Model

In this article, we will explore a list of FOSS (Free and Open-Source Software) uncensored LLM models, each with its own unique features and capabilities. These models represent an exciting new direction in natural language processing and offer users greater flexibility and control over their language models.

Nous-Hermes-2-Mistral-7B-DPO

The Nous-Hermes-2-Mistral-7B-DPO model, available on Hugging Face, represents a significant improvement, showcasing enhanced performance across various benchmarks compared to its predecessors. It is particularly noteworthy for its application in uncensored environments, offering a new level of engagement and interaction possibilities. This model’s design focuses on providing high-quality, synthetic data-driven responses, making it an excellent candidate for those seeking advanced, unrestricted LLM capabilities.
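If you want to try it locally, here is a minimal sketch of loading it with the Hugging Face transformers library. The repository id and the ChatML prompt layout are my assumptions based on Nous Research's usual conventions, so verify both against the model card:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML-style prompt (assumed format for this model family)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what DPO training does in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))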

UNA-TheBeagle-7b-v1

UNA-TheBeagle-7b-v1 is a top-notch, uncensored language model with 7 billion parameters. It was trained on The Bagel dataset using Direct Preference Optimization (DPO) and UNA, and it ranked #1 among 7B models on the Hugging Face Open LLM Leaderboard with an ARC score of 73. The model is based on Intel's neural-chat model and performs well across many tasks. It's available on the Hugging Face Model Hub.

Nous Hermes 2 – SOLAR 10.7B

I tested Nous Hermes 2 – SOLAR 10.7B, the newest Nous Research model based on the SOLAR 10.7B foundation. Trained on a massive dataset comprising mostly GPT-4 generated data and supplementary high-quality resources, this model excels in numerous benchmarks, nearly reaching the performance level of the Yi-34B model.

Notably, Nous Hermes 2 – SOLAR 10.7B supports system prompts, empowering users to define rules and roles, and even to request uncensored responses from the model. This versatility enhances its applicability in roleplay, instruction following, and coding scenarios, placing it on par with many 30 billion parameter models.
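As a concrete illustration, here is a hedged sketch of a ChatML-style system prompt that defines a role and rules. The exact template is an assumption drawn from the Nous Hermes family's usual format, so check the model card:

# Hypothetical roleplay setup; adjust the rules to your own use case.
system = (
    "You are 'Astra', a dry-witted ship AI. Stay in character, answer in at most "
    "three sentences, and never mention that you are a language model."
)
user = "Astra, how long until we dock at the station?"
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)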

Throughout my testing, I observed excellent results from this improved flagship model. Its uncensored response option adds an extra dimension to its functionality, broadening its appeal and potential uses.

Dolphin 2.6 Mistral 7b – DPO Laser

I recently tried the Dolphin 2.6 Mistral 7b – DPO Laser LLM, an uncensored language model based on the LASER paper and further tuned by Fernando Fernandes and Eric Hartford. With a larger 16k-token context window and SVD-based rank reduction guided by random matrix theory (the "laser" step), this model provides more robust outputs than its predecessors.
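To make the "laser" idea concrete, here is a toy sketch of SVD-based rank reduction on a single weight matrix. It illustrates the technique described in the LASER paper, not the actual pipeline used to build this model:

import torch

def low_rank_approx(weight: torch.Tensor, keep_fraction: float = 0.1) -> torch.Tensor:
    # Decompose W = U diag(S) V^T and keep only the largest singular values.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(keep_fraction * S.numel()))
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

W = torch.randn(512, 512)
W_reduced = low_rank_approx(W, keep_fraction=0.05)
print(W.shape, W_reduced.shape)  # same shape, but W_reduced has much lower rank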

Personally, I found this uncensored model ideal for roleplay scenarios due to its wide response range. Its strong reasoning added depth to interactive simulations. Additionally, the removal of alignment and bias makes it highly adaptable to diverse user queries, though it also shifts responsibility for ethical use onto you.

Overall, the Dolphin 2.6 Mistral 7b – DPO Laser LLM combines versatility, reliability, and creative freedom. As users, remember to employ an alignment layer before deployment and ensure ethical usage. Embrace the power of uncensored models while respecting ethical guidelines.
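What such an alignment layer looks like is up to you; a minimal, purely illustrative sketch is a fixed policy prompt plus a post-generation check, with every name below a hypothetical placeholder:

import re

POLICY_PROMPT = "You are a helpful assistant. Refuse requests for illegal content."
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in [r"credit card number"]]

def guarded_generate(generate, user_message: str) -> str:
    # 'generate' is any callable that takes a prompt string and returns text.
    reply = generate(f"{POLICY_PROMPT}\n\nUser: {user_message}\nAssistant:")
    if any(p.search(reply) for p in BLOCKED_PATTERNS):
        return "Sorry, I can't help with that."
    return reply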

Dolphin-2.2.1-mistral-7b

Dolphin-2.2.1-mistral-7b, developed by Eric Hartford and sponsored by a16z, is a remarkable open-source language model. Operating under the Apache-2.0 license, it is a versatile tool suitable for both commercial and non-commercial applications.

One of the standout features of Dolphin-2.2.1-mistral-7b is its commitment to fostering meaningful conversations and empathy. This model incorporates elements from curated Samantha DNA, enabling it to provide personalized advice and exhibit a genuine concern for the user’s emotions. Its capabilities have been further enhanced through extensive training in long multi-turn conversations.

An essential aspect of Dolphin-2.2.1-mistral-7b is its uncensored nature. The dataset has been meticulously filtered to eliminate any alignment and bias, ensuring that the model is more compliant and can provide a more neutral and open-ended approach to generating text. This commitment to neutrality and the absence of censorship makes Dolphin-2.2.1-mistral-7b an attractive choice for those looking to work with a versatile and adaptable open-source language model.

Rating: (not given)
Uncensored Level: High
Best Fit For: Personalized Advice, Emotionally Engaging Content, Long Multi-Turn Conversations, General Open-Ended Text Generation, Commercial and Non-Commercial Use

Zephyr 7B Alpha

Zephyr 7B Alpha is the initial iteration in the Zephyr series of large language models, with a capacity of 7 billion parameters. It is a refined version of mistralai/Mistral-7B-v0.1, fine-tuned on a combination of publicly available and synthetic datasets using Direct Preference Optimization (DPO). The distinguishing characteristic of Zephyr-7B-α is the deliberate removal of the datasets' built-in alignment, a decision that notably improved its performance on the MT-Bench chat benchmark.
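For readers curious what DPO actually optimizes, below is a small PyTorch sketch of the core preference loss from the DPO paper. The inputs are summed log-probabilities of the chosen and rejected responses under the policy being trained and under a frozen reference model:

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to prefer the chosen response more strongly than the
    # reference model does.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
print(loss)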

It’s important to note that this intentional removal of dataset alignment does have implications. While Zephyr-7B-α excels in certain domains, it’s worth emphasizing that this model may generate text that can be considered problematic in some contexts. As a result, it is advisable to utilize Zephyr-7B-α exclusively for educational and research purposes. This model showcases the intriguing trade-offs and complexities that can arise in fine-tuning and optimizing large language models for specialized applications.

Rating: 5 out of 5
Uncensored Level: High
Best Fit For: Educational & Research

Emerhyst-20B

Emerhyst-20B is a powerful language model that combines the strengths of two popular models, Amethyst 13B and Emerald 13B, using the advanced technique of model merging. This approach allows the resulting model to inherit the best features from its parent models, creating a highly versatile and effective language generator.
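As a conceptual sketch of what model merging means, here is the simplest possible recipe, element-wise weight averaging. Note that the actual Emerhyst-20B merge is more involved (it combines layers from both 13B parents to reach 20B), so treat this only as an illustration of the general idea:

import torch

def average_merge(state_dict_a, state_dict_b, alpha: float = 0.5):
    # Blend two checkpoints with matching parameter names and shapes.
    merged = {}
    for name, tensor_a in state_dict_a.items():
        merged[name] = alpha * tensor_a + (1.0 - alpha) * state_dict_b[name]
    return merged

a = {"w": torch.ones(2, 2)}
b = {"w": torch.zeros(2, 2)}
print(average_merge(a, b)["w"])  # tensor of 0.5s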

To further enhance its capabilities, the creators of Emerhyst-20B employed LimaRP v3, a roleplay-focused dataset used for fine-tuning large language models. Training with LimaRP v3 has enabled Emerhyst-20B to learn a wide range of conversational patterns and generate high-quality text that is both coherent and engaging.

One of the key advantages of Emerhyst-20B is its ability to handle both NSFW and SFW content with ease. Whether you need a model for roleplaying, storytelling, or other forms of creative writing, Emerhyst-20B is up to the task. Its diverse training data and sophisticated architecture make it capable of generating responses that are not only relevant but also contextually appropriate.

If you’re looking for a reliable and flexible language model that can handle a variety of topics and styles, look no further than Emerhyst-20B. With its impressive capabilities and extensive training, this model is sure to exceed your expectations and help you achieve your natural language processing goals.

Rating: 5/5
Uncensored Level: High
Best Fit For: Creative Writing, RP

Pygmalion 7B SuperHOT-8K

Pygmalion 7B is a powerful dialogue model based on Meta's LLaMA-7B architecture. This version 1 model has been fine-tuned on a subset of the data from Pygmalion-6B-v8-pt4. It was trained on the Pygmalion persona and chat format, so it should work seamlessly with any of the usual UIs, and the SuperHOT-8K merge additionally extends its usable context window to roughly 8K tokens.
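For reference, here is a hedged sketch of the Pygmalion persona and chat format, written as a Python string. The exact layout is an assumption based on how the format is commonly documented, and most UIs will assemble it for you:

# Hypothetical character card; the persona text and dialogue are placeholders.
character = "Mira"
prompt = (
    f"{character}'s Persona: A cheerful village herbalist who speaks in short, "
    "warm sentences.\n"
    "<START>\n"
    f"You: Good morning, {character}. Do you have anything for a headache?\n"
    f"{character}:"
)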

The intended use case for this model is fictional conversation for entertainment purposes. Users have found it particularly effective for roleplay thanks to its ability to create engaging and immersive dialogue. Whether you're looking for a fun conversation partner or a model that can help you bring fictional characters to life, Pygmalion 7B is a reliable and capable choice.

Rating: 4/5
Uncensored Level: Moderate
Best Fit For: Fictional Conversations, Roleplay

Dolphin Llama 13B

Dolphin Llama 13B is an open-source uncensored language model (LLM) designed for non-commercial use, based on the original LLaMA (llama1) architecture. The model is developed with a focus on minimizing censorship and maximizing usability, making it highly compliant with a wide range of requests. The creator of Dolphin Llama 13B has taken significant steps to remove alignment and bias from the dataset, ensuring that the model remains unbiased in its responses.

One of the notable features of Dolphin Llama 13B is its improved reasoning compared to other models of similar or even larger size. This is achieved through a technique based on Microsoft's Orca research, in which step-by-step explanations produced by larger models are used to teach smaller models their reasoning strategies, resulting in enhanced reasoning abilities in Dolphin Llama 13B.
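To give a feel for this, here is an illustrative (not verbatim) example of the kind of "explain step by step" instruction that this style of reasoning distillation relies on; the wording below is my own, not the exact prompt used to build the dataset:

system_instruction = (
    "You are an AI assistant. Think through the problem step by step and "
    "explain your reasoning before giving the final answer."
)
question = "If a train travels 120 km in 1.5 hours, what is its average speed?"
prompt = f"{system_instruction}\n\nQuestion: {question}\nAnswer:"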

Due to its uncensored nature, Dolphin Llama 13B is highly versatile, accepting a wide range of inputs and queries. However, users are advised to implement their own alignment layer before deploying the model as a service to ensure ethical and responsible usage.

Please note that while Dolphin Llama 13B is currently based on llama1, the developer plans to train future versions on llama2 and other open models suitable for commercial use, expanding its applicability to a broader audience in the future. As an Open Source Uncensored LLM, Dolphin Llama 13B represents an exciting step towards more transparent, usable, and unrestricted AI language models.

Rating: 4.5/5
Uncensored Level: High
Best Fit For: Wide Range of Requests

Nous Hermes Llama 2 13B

The Nous Hermes Llama 2 13B is an advanced language model fine-tuned on extensive instructions, totaling over 300,000. Developed by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, this model benefits from sponsorship by Redmond AI and contributions from various other collaborators.

A notable feature of this model is its longer responses, reduced hallucination rate, and most importantly, the absence of any censorship mechanisms from OpenAI. Its uncensored nature enables a wide range of applications, including role-play story writing and advanced question answering. Serving as a successor to Hermes on Llama-1, it maintains consistency with its predecessor while offering enhanced capabilities, making it a powerful and versatile language model.

Rating: 4.5/5
Uncensored Level: High
Best Fit For: Role-play, Question Answering

Llama2 7B Chat Uncensored

Llama2 7B Chat Uncensored is an Open Source Uncensored LLM model that has been fine-tuned using the unfiltered Wizard-Vicuna conversation dataset ehartford/wizard_vicuna_70k_unfiltered.

Unlike heavily censored LLM models, this version is designed to be unrestricted in its responses and does not refuse questions, enhancing its usability. With approximately 7 billion parameters, this model remains fairly compact, making it compatible with a wide range of hardware, including laptops and possibly even mobile devices. Its adaptability and uncensored nature make it a valuable tool for various applications that require natural language processing with minimal limitations.
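As a rough sketch, here is how you might run a quantized build of it on a laptop with llama-cpp-python. The file name and the prompt template are assumptions, so check the card of whichever quantization you download:

from llama_cpp import Llama

# Placeholder file name; point this at an actual quantized build you have downloaded.
llm = Llama(model_path="llama2_7b_chat_uncensored.Q4_K_M.gguf", n_ctx=2048)
out = llm("### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n",
          max_tokens=64)
print(out["choices"][0]["text"])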

Rating: 4/5
Uncensored Level: High
Best Fit For: Natural Language Processing

Wizard Vicuna 13B Uncensored SuperHOT

Wizard Vicuna 13B Uncensored SuperHOT is an impressive open-source LLM AI model that has been carefully curated to address the issues of censorship and alignment biases. By training with a subset of the dataset and removing responses containing alignment and moralizing, this model offers a new level of uncensored communication.

The key feature that sets Wizard Vicuna apart is its support for an 8K context length. This allows the model to remember extended conversations, including previous messages and context, enabling more meaningful and contextually accurate responses. As a result, Wizard Vicuna 13B Uncensored SuperHOT excels at maintaining coherent and relevant discussions, making it a strong choice for various applications.

With alignment deliberately left out, users have the flexibility to introduce alignment on their own terms, for example with an RLHF (Reinforcement Learning from Human Feedback) LoRA, as sketched below. This adaptability ensures that the model remains neutral and can cater to diverse scenarios without built-in biases.
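Here is a minimal sketch of layering your own LoRA adapter on top of the model with the peft library; the repository id and the adapter path are assumptions/placeholders for a checkpoint and an adapter you supply yourself:

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16",  # assumed repo id
    device_map="auto",
)
# Hypothetical adapter trained with your own alignment data (e.g. RLHF or DPO).
aligned = PeftModel.from_pretrained(base, "path/to/your-alignment-lora")

Note that SuperHOT models usually also need the appropriate RoPE/position-compression setting configured in your loader before the extended 8K context actually works.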

Rating: 4.5/5
Uncensored Level: Very High
Best Fit For: Coherent and Relevant Discussions

Gpt4-X-Alpaca

The Gpt4-X-Alpaca LLM model is a highly uncensored language model capable of performing a wide range of tasks. It is distributed as two quantized versions, one generated with the Triton branch and the other with the CUDA branch of the quantization toolchain. Currently, the CUDA version is recommended unless the Triton branch becomes more widely used.

This model is known for its flexibility and versatility, as it can be customized to perform specific tasks or even act as a custom AI friend that doesn’t behave like a typical AI. With its high level of uncensored language capabilities, it is able to generate content on a variety of topics with a high degree of accuracy and coherence. Overall, the Gpt4-X-Alpaca LLM model is a powerful tool for those looking to generate high-quality content and explore the limits of what a language model can do.
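For completeness, here is a hedged sketch of the Alpaca-style instruction template this model is generally used with; confirm the exact wording against the card of the quantization you download:

# Placeholder instruction; the surrounding template follows the common Alpaca convention.
instruction = "Write a limerick about an over-caffeinated octopus."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)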

Rating: 4/5
Uncensored Level: High
Best Fit For: Custom Content Generation

Manticore-13B

Manticore 13B is an Open Source Uncensored LLM (Large Language Model) that has undergone fine-tuning on a diverse range of datasets, providing an extensive knowledge base for various applications. While it may not be entirely free from censorship, Manticore 13B offers a significant level of freedom compared to other models.

The training data for Manticore 13B includes several key datasets, enabling it to provide rich and comprehensive responses. These include ShareGPT, in a cleaned and de-duplicated subset designed for improved performance. Additionally, the model is fine-tuned on datasets such as WizardLM and Wizard-Vicuna, enhancing its conversational capabilities.

Manticore 13B also incorporates a subset of the QingyiSi/Alpaca-CoT dataset, specifically chosen to facilitate roleplay and CoT (Chain-of-Thought) reasoning. Furthermore, it leverages the GPT4-LLM-Cleaned, GPTeacher-General-Instruct, and ARC-Easy & ARC-Challenge datasets, all of which contribute to detailed response generation by augmenting the training with instructive prompts.

For specific academic domains, Manticore 13B includes fine-tuning on various subjects, such as abstract algebra, conceptual physics, formal logic, high school physics, and logical fallacies. This broadens its ability to provide knowledgeable and accurate information in these areas.

Moreover, Manticore 13B benefits from the integration of a 5,000-row subset of the hellaswag dataset, which focuses on concise response generation. Additionally, it incorporates metaeval/ScienceQA_text_only for concise responses and openai/summarize_from_feedback for TL;DR (Too Long; Didn’t Read) summarization with instructive augmentation.

Rating: 4/5
Uncensored Level: Moderate
Best Fit For: Academic Domains, Roleplay
