Version 1.2.12
What’s New
- Built-in local models — Run MLX-based models like Qwen 3, Gemma 3, Mistral, and Phi directly from Elvean. No Ollama, no terminal, no setup — just pick a model and start chatting.
- Powered by SwiftLM — A native Swift inference server running on Apple Silicon via Metal. Fast, private, fully offline. A usage sketch follows this list.
- Zero-config fallback — No API keys? No problem. Local models work out of the box so you can start using AI immediately.
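For anyone who wants to script against the local server rather than use the chat UI, here is a minimal sketch of what a single chat turn could look like from Swift. It assumes SwiftLM exposes an OpenAI-style chat-completions endpoint on localhost; the port, path, request fields, and the `qwen3-4b` model identifier are illustrative assumptions rather than documented API, so check the app's documentation for the real details.

```swift
import Foundation

// Hypothetical request/response shapes for a local, OpenAI-style
// chat endpoint. Field names and the endpoint URL below are
// assumptions for illustration only.
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatChoice: Codable {
    let message: ChatMessage
}

struct ChatResponse: Codable {
    let choices: [ChatChoice]
}

/// Sends one chat turn to the local server and returns the reply text.
func chat(prompt: String) async throws -> String {
    // Assumed local endpoint; no API key is needed because
    // inference runs entirely on-device.
    var request = URLRequest(url: URL(string: "http://127.0.0.1:8080/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        ChatRequest(
            model: "qwen3-4b", // hypothetical local model identifier
            messages: [ChatMessage(role: "user", content: prompt)]
        )
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    let response = try JSONDecoder().decode(ChatResponse.self, from: data)
    return response.choices.first?.message.content ?? ""
}
```

From any async context, calling it is one line: `let reply = try await chat(prompt: "Hello!")`.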