Eulerwa Inc. opens every layer of AI development — from model training to hardware acceleration to autonomous agents — through CLI and open source.
EulerForge for training, EulerGo for Go AI research, EulerWeave for data, EulerAgent for execution, EulerPress for publishing, EulerNPU for hardware, EulerAtlas for robot learning, and EulerStack for model design.
A research-oriented LLM fine-tuning framework that injects LoRA into HuggingFace models and lets you train a dense model as a Mixture-of-LoRAs or MoE Expert LoRA structure. On top of a familiar dense-SFT workflow, EulerForge adds Dense → MoE conversion and phase scheduling, so routing, expert specialization, and MoE stability can be studied reproducibly on a modest GPU budget. A single YAML preset carries you through SFT → DPO/ORPO → RM → PPO.
EulerForge provides a standardized workflow for converting a dense model into an MoE-style trainable one, so experiments can be expressed in configuration rather than in glue code.
| Injection | Dense LoRA · Mixture LoRA (`mixture_lora`) · MoE Expert LoRA (`moe_expert_lora`) · Native MoE Expert LoRA |
|---|---|
| Training | SFT · DPO · ORPO · RM · PPO |
| Backbones | Qwen2/3 · Llama 2/3 · Gemma 3 · Gemma 4 (dense+MoE) · Mixtral |
| Quantized training | nf4 / int4 / int8 (bitsandbytes) |
| Extras | HF Export · Optuna grid/Bayesian search · integrated benchmark · 5-language CLI |
24 KO/EN tutorials + full CLI reference included
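The Mixture-of-LoRAs structure described above can be pictured as top-k routing over several LoRA adapters sitting on one frozen dense weight. The sketch below is illustrative only: the function name, shapes, and gating scheme are assumptions for exposition, not EulerForge's actual implementation.

```python
import numpy as np

def mixture_lora_forward(x, W, loras, gate_W, top_k=2, alpha=16):
    """Route an input through the top-k of E LoRA 'experts' on a frozen base weight.

    x:      (d_in,) input vector
    W:      (d_in, d_out) frozen dense weight
    loras:  list of (A, B) pairs, A: (d_in, r), B: (r, d_out)
    gate_W: (d_in, E) router weights
    """
    logits = x @ gate_W                        # router score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # softmax over selected experts only
    y = x @ W                                  # frozen dense path
    for p, e in zip(probs, top):
        A, B = loras[e]
        r = A.shape[1]
        y += p * (alpha / r) * (x @ A @ B)     # gated low-rank update
    return y
```

The dense path stays frozen; only the small A/B factors and the router are trainable, which is what keeps MoE-style experiments feasible on a modest GPU budget.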
A comprehensive data processing system that bridges the gap between raw datasets and production-ready model training.
Designed for researchers and developers to refine and analyze data in local environments.
Business-critical features for scaling from a single GPU to cluster-level operations.
A model assembler that lets you compose a hybrid LLM of any scale from a single YAML spec — mixing Attention, Mamba, RetNet, Hyena mixers with MoE FFN blocks. A 5-layer pipeline (DSL → Schema → IR → Compiler → CLI) runs validation and normalization, then emits a HuggingFace PreTrainedModel (config.json + model.safetensors) that EulerForge can pick up directly for training — so design → assembly → fine-tuning flow as one path.
15 arch_ presets (all ~2B, skill-level walkthrough) reproduce published architectures at a fixed budget, and 16 llm_ presets cover 4 sizes × 4 variants. Presets are starting points — tweak d_model and layer count to assemble a model of any size.
| Mixers | Attention · Mamba · RetNet · Hyena |
|---|---|
| FFN | MLP · Gated MLP (SwiGLU) · MoE (top-k routing) |
| Skill-level walkthrough | beginner (GPT-2/Llama) · intermediate (Mistral/Gemma2/Qwen) · advanced (Jamba/Samba/RetNet) · expert (MoE × mixer 2D design space) |
| Compile target | HuggingFace model directory or JSON runtime config |
Three-stage validation (schema → cross-field compatibility → realism heuristics) catches design errors before compilation. All CLI messages are translated into 5 languages (ko/en/zh/ja/es).
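A single-YAML hybrid spec of this kind might look like the sketch below. Every key name here is illustrative, not the actual EulerStack DSL; it only shows the shape of the idea: per-block mixer choice plus an MoE FFN, compiled to a HuggingFace target.

```yaml
# Hypothetical spec sketch -- key names are illustrative, not EulerStack's schema.
model:
  name: hybrid-2b-demo
  d_model: 2048
  vocab_size: 32000
layers:
  - repeat: 4
    mixer: attention        # Attention / Mamba / RetNet / Hyena
    ffn: swiglu
  - repeat: 20
    mixer: mamba
    ffn:
      type: moe
      experts: 8
      top_k: 2              # top-k routing per token
compile:
  target: huggingface       # emits config.json + model.safetensors
```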
A local-first agentic framework with an 8-state machine, Pattern/Graph orchestrators, RAG, Long-Term Memory (SQLite), MCP integration, 30+ CLI commands, and a plugin system. Every action is logged, audited, and approved by humans.
Philosophy: Every action by an agent that changes the world (file writes, shell execution, external calls) must be approved by a human.
30+ commands across core, approval, RAG, memory, workflow, pattern, and MCP groups. Built for developers who want autonomous capabilities without sacrificing control and security.
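The approval philosophy above reduces to a small pattern: classify each action as mutating or read-only, gate mutating ones behind explicit human consent, and log every decision. This is a minimal sketch of that pattern; the class, the action taxonomy, and `approve_fn` are assumptions for illustration, not EulerAgent's actual API.

```python
# Actions that "change the world" and therefore require human approval.
MUTATING = {"file_write", "shell_exec", "external_call"}

class ApprovalGate:
    def __init__(self, approve_fn):
        self.approve_fn = approve_fn      # e.g. an interactive y/n prompt
        self.audit_log = []               # every decision is recorded

    def run(self, action, payload, execute):
        needs_approval = action in MUTATING
        approved = self.approve_fn(action, payload) if needs_approval else True
        self.audit_log.append((action, needs_approval, approved))
        if not approved:
            return None                   # blocked: nothing touches the world
        return execute(payload)
```

Read-only actions pass straight through; denied actions never execute, but still leave an audit trail.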
A CLI-based imitation learning framework that trains policies from expert demonstrations and evaluates them in simulation. 4 classic domains (car, drone, humanoid, robot dog) and 4 industrial domains (mobile manipulation, warehouse AGV, smart farm agri-robot, shipyard crane) — 8 domains on a unified schema. 11 command groups including edge deployment, with a Domain Plugin architecture.
All modules communicate consistently through fixed obs/action dimension schemas per domain.
| Classic (01-04), obs/act dims | Car 4/2 · Drone 6/3 · Humanoid 12/6 · Robot Dog 12/8 |
|---|---|
| Industrial (05-08), obs/act dims | Mobile Manipulation 18/8 · Warehouse AGV 14/4 · Agri-Robot 16/6 · Shipyard Crane 20/9 |
| Policy Models | BC-MLP, BC-RNN, BC-CNN, ACT, Diffusion Policy |
| Levels | L0 Toy · L1 Intermediate · L2 Advanced (real backend) |
11 command groups with Domain Plugin architecture for extensibility. Initialize, train, and run simulation rollout in 3 commands.
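The fixed per-domain schemas in the table reduce to a small registry that every module can validate against. The sketch below uses the obs/action dimensions listed above; the registry API and domain keys are illustrative, not EulerAtlas's actual code.

```python
# Fixed (obs_dim, action_dim) per domain, following the table above.
DOMAINS = {
    "car": (4, 2), "drone": (6, 3), "humanoid": (12, 6), "robot_dog": (12, 8),
    "mobile_manip": (18, 8), "warehouse_agv": (14, 4),
    "agri_robot": (16, 6), "shipyard_crane": (20, 9),
}

def validate_demo(domain, obs, action):
    """Reject a demonstration step whose shapes don't match the domain schema."""
    obs_dim, act_dim = DOMAINS[domain]
    if len(obs) != obs_dim:
        raise ValueError(f"{domain}: expected obs dim {obs_dim}, got {len(obs)}")
    if len(action) != act_dim:
        raise ValueError(f"{domain}: expected action dim {act_dim}, got {len(action)}")
    return True
```

Because every policy, simulator, and exporter checks the same registry, a shape mismatch fails loudly at ingestion instead of silently corrupting training.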
An NPU inference stack that defines 123 operators across 13 groups with 10 data types, compiles spec.yaml to .npuart artifacts, and targets Zynq-7020 FPGA for hardware simulation. A single CLI covers the entire workflow from validation to deployment.
123 operators organized in 13 groups, supporting 10 data types. The compilation pipeline transforms spec.yaml into deployable .npuart artifacts.
| Operators | 123 operators in 13 groups (arithmetic, activation, reduce, etc.) |
|---|---|
| Data Types | 10 dtypes (float32, float16, bfloat16, int8, uint8, etc.) |
| Compilation | spec.yaml → validate → compile → .npuart |
| FPGA Target | Zynq-7020 board support |
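The spec.yaml → validate → compile → .npuart flow starts from a declarative graph description. The fragment below is a hypothetical sketch of what such a spec could look like; the field names are illustrative, not the actual EulerNPU schema.

```yaml
# Hypothetical spec.yaml sketch -- field names are illustrative.
target: zynq-7020
dtype: int8
graph:
  - op: matmul          # one of the 123 operators, "arithmetic" group
    inputs: [x, w0]
    output: h0
  - op: relu            # "activation" group
    inputs: [h0]
    output: y
```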
11 subcommands cover the entire NPU development workflow from validation to profiling and deployment.
| Core | validate, compile, run, sim |
|---|---|
| Analysis | profile, explain, benchmark |
| Hardware | board-smoke, calibrate, replay |
| Optimization | compress-cache |
A CLI research platform for Go AI, designed so that distinct personal styles (gipung, 기풍) can be learned, compared, and preserved on top of a strong shared skill layer. It is our research-side response to a long-running worry in the Go community — that, in the age of AI, every professional's moves seem to converge toward a single "correct" answer. EulerGo treats the individuality visible in the first ~50 moves as a learnable signal via a style latent and multi-algorithm comparison, aiming at a research environment where many characters coexist rather than one monolithic super-engine.
9 base input channels and N style-latent channels are broadcast into the input tensor, so multiple "style heads" can coexist on one backbone and be trained or swapped — a reproducible way to treat the individuality of the first 50 moves as a learnable signal.
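The channel-broadcast scheme above can be sketched in a few lines: each scalar of the style latent becomes one constant plane stacked onto the base board features. Shapes and the function name are illustrative assumptions, not EulerGo's actual code.

```python
import numpy as np

def build_input_planes(board_planes, style_z):
    """Stack N constant style planes onto the base board feature planes.

    board_planes: (C, size, size) base input channels, e.g. (9, 19, 19)
    style_z:      (N,) style latent vector
    """
    _, size, _ = board_planes.shape
    # Broadcast each latent scalar into a full constant plane.
    style_planes = np.broadcast_to(
        style_z[:, None, None], (style_z.shape[0], size, size))
    return np.concatenate([board_planes, style_planes], axis=0)
```

Swapping `style_z` swaps the "character" fed to the backbone without touching the shared skill layer.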
Swap the search algorithm like a knob for fair comparison.
| Algorithms | PUCT MCTS · Gumbel (Sequential-Halving) · QZero (searchless) · PGS (tree-free) · Native C++ MCTS · Random baseline |
|---|---|
| Evaluation | Round-robin league + iterative ELO + bootstrap CIs |
| Board | 9×9 / 19×19 (Chinese scoring, ko/superko, SGF) |
| Extensions | Distributed selfplay (coordinator/worker) · web dashboard · Toga GUI app · i18n CLI (ko/en/zh/ja/es) |
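The league evaluation above rests on iterative ELO over round-robin results. The sketch below is the standard logistic ELO update, shown for orientation; the function signature and constants are illustrative, not EulerGo's exact implementation (which adds bootstrap confidence intervals on top).

```python
def elo_ratings(results, n_players, k=16.0, sweeps=200, base=1000.0):
    """Iteratively fit ELO ratings from (winner, loser) game records."""
    r = [base] * n_players
    for _ in range(sweeps):
        for w, l in results:
            # Expected score of the winner under the logistic ELO model.
            expect_w = 1.0 / (1.0 + 10 ** ((r[l] - r[w]) / 400.0))
            delta = k * (1.0 - expect_w)   # surprise of the observed win
            r[w] += delta                  # zero-sum update
            r[l] -= delta
    return r
```

Repeated sweeps let ratings settle even when games arrive in an arbitrary order, which is what makes a round-robin league comparable across engines.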
A local-first CLI tool that translates only the prose in a document while preserving code blocks, math expressions, and URLs exactly. Use cases range from enterprise document localization to translating JSONL data for AI training.
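Preserving code and URLs while translating prose is typically a "protect, translate, restore" pass: split the text on protected spans, translate only the prose segments, and reassemble. The sketch below illustrates that pattern; the regexes and the `translate` stub are assumptions for exposition, not EulerPress's actual pipeline.

```python
import re

# Spans that must survive byte-identical: fenced code, inline code, URLs.
PROTECTED = re.compile(r"(```.*?```|`[^`]*`|https?://\S+)", re.DOTALL)

def translate_preserving(text, translate):
    """Translate prose segments only; protected spans pass through untouched."""
    parts = PROTECTED.split(text)             # odd indices are protected spans
    return "".join(
        part if i % 2 else translate(part)    # translate only the prose parts
        for i, part in enumerate(parts)
    )
```

Because `re.split` with a capturing group keeps the matched spans in the output list, protected text is reinserted verbatim between the translated segments.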
Access specialized technical publications and the open-source ecosystem.
In-depth research books on quantum computing and AI architectures.
Community tools for data processing and model orchestration.
Domain-specific solutions built on the Eulerwa stack.
A healthcare agent that provides personalized body condition analysis and nutrition-based "Food over Medicine" guidance.
An autonomous coding system that leverages EulerAgent to automate design, debugging, and internal software lifecycle management.
Technology exists to serve humanity
We oppose the military automation of artificial intelligence. Technology must be used to protect life, not to take it — no technological achievement can take precedence over human safety.
Artificial intelligence must serve as a tool that elevates human dignity and strengthens democratic values. We oppose any use of AI that suppresses individual freedom or undermines democratic principles.
All software, models, and services released by Eulerwa may not be used for purposes that violate these principles. We take our responsibility for how our technology is used seriously.