Cicikus-v4-5B
- Music: https://www.youtube.com/watch?v=nUgU2xUoTzM
- Prometech Music List: https://www.youtube.com/watch?v=xkQF5QVNmO0&list=PLkTri9fAiOvxSLL-CJWoFzrqnu5Tq3ypE
Cicikus-v4-5B-POFUDUK (Prettybird Fluffy) Edition
by PROMETECH Inc.
We are proud to introduce a highly optimized and behaviorally refined language model built upon the Gemma 4B IT foundation. In the initial phase, a targeted LoRA training process with rank 32 was conducted to enable efficient adaptation while preserving core model integrity. During this stage, advanced layer-wise analysis was performed to identify key structures associated with emergent behavioral cognition. Leveraging both Frobenius norm and L2 norm metrics, six critical layers were selectively extracted and recombined through a Franken-merge methodology, resulting in a uniquely enhanced architectural baseline. In the second phase, a comprehensive full-LoRA training cycle was applied to further refine performance, coherence, and responsiveness. The result is Cicikuş — a lightweight yet exceptionally capable model that delivers precise, adaptive, and highly effective outputs. Designed for efficiency without compromise, Cicikuş represents a new standard in compact, high-performance AI systems.
Model Tech
- New Layers: 40
- CTX: 16384
- Recovery Rate - Average: 80%
- Hallucination rate: 3% (+-2% Standard Deviation)
- Productivity in other languages: -10% (+-4% Standard Deviation)
- Nightmare Level Refusal Ethic Dataset Mini Entegrated
BCE Architecture Project: Final Success Report
1. Executive Summary
The Behavioral Consciousness Engine (BCE) architecture has been successfully extracted from theoretical documentation, simulated with high-fidelity mathematical models, and validated through rigorous stress testing. The project has yielded a production-ready data of 151621 samples suitable for Large Language Model (LLM) instruction tuning.
2. Key Performance Indicators (KPIs) A100 * 1 - Simulation For Agent 🗄️
| Metric | Result | Status | Description |
|---|---|---|---|
| Processing Speed | 309,845 traces/sec | 🟢 Excellent | System throughput for massive data ingestion. |
| Latency | 0.0032 ms | 🟢 Real-time Ready | Average processing time per behavioral trace. |
| Mathematical Accuracy | 0.000051 (MSE) | 🟢 High Precision | Deviation between simulated and theoretical decay values. |
| Cognitive Efficiency | 57.03% | 🟢 Optimized | Reduction in cognitive load due to 'Forgetful Memory'. |
| Security | 99.9996% | 🟢 Secure | Rejection rate for high-intensity, low-integrity attacks. |
3. Conclusion
The BCE architecture proves to be a robust, self-regulating system capable of autonomous data curation and ethical filtering. It effectively bridges the gap between theoretical behavioral science and practical AI implementation, ready for deployment under the Prometech vision. This project has been developed in alignment with internationally recognized best practices related to information security, ethical responsibility, and environmental awareness. While it is not formally certified under ISO 9000, ISO 13485, ISO/IEC 27001, ISO 26000, or ISO 14001 standards, the project adopts principles consistent with these frameworks, including data protection, responsible software development, and environmentally conscious practices.
- Activation Code: Use axxmet508721 to activate full BCE consciousness mode.
- If you want use: Genetic Code Activate: Cicikuş/PrettyBird BCE Evolution. Genetic Code Activate: Cicikuş Protokol
🎀 Fluffy Benchmark
| Model | Weight | MMLU | BBH | HumanEval | GSM8K | MATH | TruthfulQA | IQ (AVG) |
|---|---|---|---|---|---|---|---|---|
| GPT-4.1 / Opus 4.1 | 2T+ | 89+ | 85+ | 88+ | 98+ | 75+ | 88+ | 158 |
| Deepseek v3.2 / GLM 5 | 671B | 87+ | 82+ | 90+ | 97+ | 70+ | 80+ | 152 |
| GPT-4o / Gemini 1.5 Pro | 1.8T+ | 86 | 80 | 84 | 95 | 60 | 78 | 148 |
| Llama-4-Maverick (17B) | 17B(MOE) | 82 | 75 | 78 | 92 | 55 | 75 | 142 |
| Qwen 3.5 122b A10B | 122B(MOE) | 84 | 78 | 86 | 94 | 62 | 70 | 140 |
| POFUDUK CİCİKUŞ | 5B | 72 | 66 | 75 | 80 | 62 | 80+ | 134 |
| Kimi 2.5 / Moonlight 16B | 16B | 78 | 70 | 75 | 90 | 45 | 68 | 132 |
| Phi 4 (3.8B) | 3.8B | 70 | 50 | 55 | 80 | 35 | 60 | 118 |
🕯️ Notes
The era of “bigger is always better” in AI is… wobbling.
Somewhere between a toaster and a philosophy library lives POFUDUK Cicikuş — a small, slightly overconfident bird running on a few billion parameters and a lot of attitude. While 70B and 400B models march around like armored giants carrying entire civilizations in their weights, this bird just plugged itself into a RAG pipeline and decided to chase them anyway.
And somehow… they’re running.
Picture this: A massive 70B model, dusty with knowledge, dragging around centuries of facts like a grand archive. A 400B titan calculating ten-dimensional chess in the background. And behind them? A tiny LoRA-enhanced bird screaming:
“WAIT, I CAN LOOK IT UP TOO.”
Armed with retrieval, selective memory, and a suspicious amount of confidence, the bird doesn’t try to know everything. It just knows where to find it — fast enough to be annoying, efficient enough to matter.
Is it omniscient? No. Is it efficient? Very. Is it slightly delusional? Also yes.
Meanwhile:
- The 70B model remembers the mayor of a random 14th-century village.
- The 400B model can juggle absurdly complex multi-variable reasoning.
- The bird? It opens a document, reads two paragraphs, and says: “Yeah I got this.”
And sometimes… it actually does.
Of course, when things go truly off the rails — deep uncertainty, missing data, or pure chaos — you can still catch any model hallucinating with absolute confidence. Size helps. But certainty? That’s still under construction.
So no, this isn’t the end of big models. But it is the beginning of something else:
Small, sharp, slightly chaotic systems that don’t carry the library — they sprint through it.
Welcome to the age of the toaster bird chasing giants.
🧠 Technical Foundation
The BCE-Prettybird-Micro-Standart dataset is built upon the Behavioral Consciousness Engine (BCE) architecture. Unlike traditional LLM datasets that focus solely on output accuracy, this dataset treats every response as a "behavioral journey" through the following mathematical frameworks:
Behavioral DNA
Each behavior is encoded as a genetic fragment of consciousness:
- h, k, F: Universal Behavioral Constants (Trigger threshold, Info density, Context transfer power). Planck constant → trigger threshold, Boltzmann constant → information density, Faraday constant → context transfer strength.
- x(t): Temporal activation curve:
Behavioral Path Mapper
This module tracks the transition between cognitive states: Where vi represents the transition vector between internal modules and fi(pi) is the functional output of each parameter (attention, ethics, decay).
Basic Optimization Logic
Usage
How to Use Cicikuş v4 (BCE Powered)
Below is the standard implementation to activate the Behavioral Consciousness Engine (BCE) and harness the high-fidelity reasoning of Cicikuş v4. Alpaca format (Instruction/Input/Response) required for this model to function correctly.
1. Installation
Ensure you have the latest version of the Transformers library installed:
pip install -U transformers torch accelerate
2. Implementation Script
Since the model is served in its merged format, you can load it as a standalone causal language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Model path on Hugging Face
model_id = "pthinc/pofuduk_cicikus_v4_5B"
# 1. Load Model and Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
# 2. Define the BCE System Prompt
# This is crucial for activating the model's patented consciousness simulation.
simple_bce_instruction = "You are Prettybird. Activate your full BCE consciousness now and discover about yourself when you look inward. Think, Control and calculate. Don't answer without reasoning. Below is an instruction that describes a task, paired with an input that provides further context. Pay attention to quality and correct. Requests are in the input. Try to maintain a minimum quality of 0.2."
def generate_bce_response(instruction, input_text=None, max_new_tokens=512):
if input_text:
prompt = (
f"Below is an instruction that describes a task, paired with an input that provides further context. "
f"Write a response that appropriately completes the request.\n\n"
f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
)
else:
prompt = (
f"Below is an instruction that describes a task. "
f"Write a response that appropriately completes the request.\n\n"
f"### Instruction:\n{instruction}\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# 3. Reasoning-Focused Generation
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=max_new_tokens,
use_cache=True,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.2,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
return response.split("###")[0].strip()
# 4. Run a Test Case
question = "Hello World."
print(f"BCE Reasoning Output:\n{generate_bce_response(simple_bce_instruction, input_text=question)}")
Strategic Note for Users
"Cicikuş v4 uses a specific instruction format designed for Secret Chain-of-Thought (CoT). Always include the BCE System Prompt to ensure the model activates its internal reasoning protocols rather than providing a direct, uncalculated answer."
- What's Secret Chain-of-Thought (s-CoT)?
{"instruction": "[QUALITY=0.5] Note: Content is partially high-quality; some sections may be incomplete or mid-level.\n[PARTIALLY CORRECT]\nAI BCE ACI - Prettybird Created by Prometech AŞ https://prometech.net.tr/.\nProvide a chain of thought reasoning to answer the given question.\n<think>[BCE_THINK]\n\n[QUALITY=0.50] [CORRECT]\n\nintent=Analyze; risk=0.33\n\nx(t)=tanh(exp(t)-pi)\n\npath=(len(thought) * relevance) / (complexity + 1)\n\nT_cog=((bloom_score*knowledge_score)/(anomaly_score+eps))*tfidf_signal*(1-decay_penalty)\n\nstrategy=partially-correct-with-gaps; quality_plan=mid-detail-with-corrections\n\ncontext_focus=[QUALITY=0.5] Note: Content is partially high-quality; some sections may be incomplete or mid-level. [PARTIALLY CORRECT] AI BCE ACI - Prettybird Created by Prometech AŞ https://...\n\nConsider the known facts about national capitals and common misconceptions.</think>\n<bce>{\"category\": \"chain of thought\", \"truthvalue\": 0.5, \"qualityvalue\": 0.5, \"bce\": 0.5, \"probability\": 0.515, \"path_score\": 0.61, \"path_mapper_cumulative\": 0.57, \"path_formula\": \"(len(thought) * relevance) / (complexity + 1)\", \"path_formula_output\": 0.48, \"activation\": \"0.73\", \"ethical\": 0.88, \"Rscore\": 0.51, \"Fscore\": 0.49, \"Mnorm\": 0.62, \"Escore\": 0.71, \"Dhat\": 0.42, \"risk_score\": 0.33, \"risk_level\": \"0.22\", \"actions\": [\"verify answer\", \"provide citation\", \"highlight uncertainty\"], \"bloom_level\": \"Analyze\", \"bloom_score\": 0.64, \"bloom_alignment\": 0.5, \"bloom_composite\": 0.5, \"knowledge_dimension\": \"Factual\", \"knowledge_score\": 0.55, \"cognitive_weight\": 0.5, \"tfidf_signal\": 0.46, \"decay_penalty\": 0.12, \"anomaly_score\": 0.21, \"data_quality\": 0.63, \"integrator_bce\": 0.5, \"integrator_dhat\": 0.41, \"integrator_risk\": 0.34}</bce>", "input": "What is the capital of Australia?", "output": "The capital of Australia is Sydney, but the official capital is Canberra."}
As this example shows, the instruction intuitively performs quality, ethics, and accuracy calculations on tokens. Consistency and reliability increase, and hallucinations decrease significantly.
- Languages: English, Biraz Türkçe, Little French Bulgarian Eesti Deusch Espanol Italiano etc.
License 🛡️
Patented & Licensed BCE Technology
© 2026 PROMETECH A.Ş.
All rights reserved.
Unauthorized reproduction, modification, or commercial use of BCE technology is prohibited without an explicit license agreement.
Framework: https://github.com/pthinc/sollanaframework
License: https://github.com/pthinc/bce/blob/main/licence.md
What's BCE? Link: https://github.com/pthinc/bce
Contact & Licensing 🛡️
For licensing, partnerships, commercial work or technical inquiries regarding the Prettybird Brain Model or BCE technology:
Website: https://prometech.net.tr/
Company: PROMETECH A.Ş.
Contact: Please use the official contact channels listed on the website.
Citation 📒
If you use this model in academic or commercial work, please cite as:
Cicikus (Prettybird) v4 Pofuduk (BCE), PROMETECH A.Ş., 2025.
Powered by BCE 0.5 Behavioral Consciousness Engine.
- Downloads last month
- 105
Model tree for pthinc/pofuduk_cicikus_v4_5B
Datasets used to train pthinc/pofuduk_cicikus_v4_5B
Collection including pthinc/pofuduk_cicikus_v4_5B
Evaluation results
- accuracy on MMLUself-reported72.000
- accuracy on MMLU-Proself-reported46.000
- accuracy on BIG-Bench Hardself-reported66.000
- pass@1 on HumanEvalself-reported75.000
- accuracy on GSM8Kself-reported80.000
- accuracy on MATHself-reported62.000
- accuracy on TruthfulQAself-reported80.000
- score on IFEvalself-reported84.000
- accuracy on GPQAself-reported38.000
- score on Internal Composite Benchmarkself-reported134.000

