De-censored, tuned, and tuned again via Unsloth using custom in-house datasets and methods:
DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking
Exceeds Gemma 4 26B-A4B in critical benchmarks.
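If you want to try it right away, here is a minimal sketch of loading it with transformers (assuming standard AutoModelForCausalLM support for this architecture; the prompt is just an example):

```python
# Minimal sketch: load the released model with Hugging Face transformers.
# Quantized GGUF versions (see "Awaiting quants" below) would go through
# llama.cpp instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps memory use manageable
    device_map="auto",           # spread across available GPUs/CPU
)

# Use the chat template so the "Thinking" behavior triggers correctly.
messages = [{"role": "user", "content": "Briefly explain what a MOE model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```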
Training a Gemma 4 REAP 19B-A4B right now; it should be done tomorrow, then testing.
RE: Franken merge 26B-A3B; yes, I just need to make a map for Mergekit, and this is also in progress.
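For the curious, a Mergekit passthrough "map" looks roughly like the sketch below; the source path and layer ranges are placeholders for illustration only, not the actual recipe:

```python
# Sketch of a Mergekit passthrough map. Each slice copies a band of layers
# from the source; repeating a middle band is the usual way layer counts
# get upscaled toward a target parameter size. Paths/ranges are placeholders.
import yaml

config = {
    "merge_method": "passthrough",
    "dtype": "bfloat16",
    "slices": [
        {"sources": [{"model": "path/to/source-model", "layer_range": [0, 24]}]},
        # Duplicated middle band to grow the stack:
        {"sources": [{"model": "path/to/source-model", "layer_range": [12, 36]}]},
        {"sources": [{"model": "path/to/source-model", "layer_range": [36, 48]}]},
    ],
}

with open("franken-map.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Then run: mergekit-yaml franken-map.yaml ./merged-out --cuda
```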
RE: Claudes; depends on how the REAP run turns out.
There are a lot of updates still in progress with Unsloth/llama.cpp RE: Gemma 4s at the moment too.
There are also some dataset issues to address when training with Gemma 4s.
NOTE:
Just finished a number of fine tunes on Gemma 4's E4B, which is a MOE-like model. These will release in the next day or so, pending final testing.
UPDATE:
All of these are now up and can be downloaded.
Awaiting quants.
RE: 13B:
=> one is upscaled + trained; the other is a merge of two 9B fine tunes (and upscaled).
They are hidden as of this writing (undergoing private testing), awaiting final metrics/eval.
If they "pass", they will be made public.
These will be active within 24-48 hrs pending results.
Currently have a fully running 13B (GLM 4.7 Flash), which is very strong, and experimental 21Bs of Qwen 3.5.
These are trained and in testing, and access is limited as of this writing.
As for MOEs:
This is a little more complicated, as scripting must be written for Mergekit to "moe together" 0.8B, 2B, 4B, 9B models and so on; see the config sketch below.
A draft (by me) has been completed to do this, but it is not tested/debugged yet.
No timeline here; too many variables.
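For context, this is the standard mergekit-moe config format, which assumes all experts share one size/architecture; the custom scripting is needed precisely because it does not handle mixed 0.8B/2B/4B/9B experts out of the box. Paths and prompts below are placeholders:

```python
# Standard mergekit-moe layout: one shared base, N same-size experts,
# routed by positive_prompts when gate_mode is "hidden".
import yaml

config = {
    "base_model": "path/to/base-9B",
    "gate_mode": "hidden",  # "hidden", "cheap_embed", or "random"
    "dtype": "bfloat16",
    "experts": [
        {
            "source_model": "path/to/9B-finetune-A",
            "positive_prompts": ["creative writing", "storytelling"],
        },
        {
            "source_model": "path/to/9B-finetune-B",
            "positive_prompts": ["logic", "step-by-step reasoning"],
        },
    ],
}

with open("moe-merge.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Then run: mergekit-moe moe-merge.yaml ./moe-out --cuda
```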
RE: 35B MOEs; it is possible to address this in a different way, but I have not tried it yet.
This is a different approach from REAP.
9 Heretic Uncensored LFM fine tunes are now up at my repo:
https://huggingface.co/DavidAU/models?sort=created&search=lfm
Model card updates in progress as I write this.
The merges will take a wee bit longer.
...and 5 more new "non-heretic" ones too.
@muxodious: Excellent.
In the queue.
Important note:
I can make the base models with reasoning datasets; however, the "Kimi Mega Brain" is a complex merge of these base models (trained with different datasets) by Nightmedia.
I will query Nightmedia to see if he will do an updated "Heretic" mega brain merge after the "heretic" versions are complete.
Waiting for updates to Heretic/Transformers to make this possible with the "thinking" LFM base.
For each model, the quants are listed under "Quantizations".
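If you'd rather pull a quant programmatically than through the model page, here is a minimal huggingface_hub sketch; the repo id and filename are hypothetical placeholders, so check the actual "Quantizations" list on each model card for real names:

```python
# Minimal sketch: download one GGUF quant to the local HF cache.
# repo_id and filename below are hypothetical placeholders.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DavidAU/example-model-GGUF",   # placeholder
    filename="example-model-Q4_K_M.gguf",   # placeholder
)
print(path)  # local path, ready for llama.cpp / llama-cpp-python
```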
Hey,
I am currently restricting access to the source (of my models) due to past issues with abuse of it, which led to community issues over non-disclosure of the models' tech details as well as non-attribution of multiple parties.
I may release it in a few weeks.