What Is a Large Language Model (LLM)?

A large language model (LLM) is a type of artificial intelligence trained on billions of words of text to understand, generate, and reason about human language. Models like GPT-4, Claude, Gemini, and Llama are all LLMs.

How Do LLMs Work?

LLMs are built on the transformer architecture. During training, the model learns statistical patterns across massive text corpora — books, websites, code, and more. This allows it to:

  • Generate coherent, contextually relevant text
  • Summarize long documents
  • Translate between languages
  • Write code in dozens of programming languages
  • Reason through multi-step problems
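At its core, generation works by repeatedly predicting the next token from a probability distribution and appending it to the output. The sketch below illustrates that loop with a toy bigram "model" in pure Python; the probability table is invented for the example, and a real transformer conditions on the entire context window rather than just the previous word:

```python
import random

# Toy "language model": hand-made bigram probabilities.
# A real LLM learns distributions like these (over subword tokens,
# conditioned on the whole context) from massive text corpora.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(start: str, steps: int, seed: int = 0) -> list[str]:
    """Repeatedly sample the next word from the model's distribution."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        dist = BIGRAMS.get(out[-1])
        if dist is None:  # no continuation known: stop generating
            break
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs)[0])
    return out

print(generate("the", 3))  # e.g. a short phrase starting with "the"
```

The same predict-sample-append loop, scaled up to billions of parameters and trained on real text, is what produces the fluent output you see from an LLM.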

Key Concepts

  • Parameters: The learned weights that define the model’s knowledge. GPT-4 is estimated to have over 1 trillion parameters.
  • Tokens: LLMs process text as tokens — subword units that the model reads and generates.
  • Context window: The maximum amount of text, measured in tokens, that an LLM can consider at once, including both your prompt and its response.
  • Temperature: A sampling setting that controls randomness: lower values make output more focused and deterministic, higher values make it more varied and creative.
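Temperature works by rescaling the model's raw scores (logits) before they are turned into probabilities. This minimal sketch, using made-up logits for three candidate tokens, shows how a low temperature sharpens the distribution and a high temperature flattens it:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    # Divide logits by the temperature before applying softmax:
    # T < 1 sharpens the distribution (more deterministic output),
    # T > 1 flattens it (more varied output).
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # hypothetical scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)
print(max(low), max(high))  # the top token dominates more at low temperature
```

At temperature 0.5 the most likely token captures most of the probability mass, so sampling almost always picks it; at 2.0 the probabilities are much closer together, so less likely tokens get chosen more often.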
Popular LLMs

  Model      Provider     Key Strength
  GPT-4o     OpenAI       General-purpose reasoning
  Claude     Anthropic    Long context, safety
  Gemini     Google       Multimodal (text + images)
  Llama      Meta         Open-source, local deployment
  Mistral    Mistral AI   Efficient, open-weight

Running LLMs Locally

With tools like Ollama, you can run open-source LLMs directly on your Mac, so your prompts never leave your machine. Elvean connects both to local models via Ollama and to cloud APIs.
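Ollama exposes a local REST API (on port 11434 by default), so any program on your Mac can talk to a local model over HTTP. Here is a minimal Python sketch using only the standard library; the model name "llama3" is an example and assumes you have pulled that model with Ollama:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's local /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # return one complete response instead of a stream
    }).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With Ollama running (e.g. after `ollama run llama3`), send the request:
# resp = urllib.request.urlopen(build_request("llama3", "Why is the sky blue?"))
# print(json.loads(resp.read())["response"])
```

Because everything stays on localhost, nothing in this exchange is sent to the cloud.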

Elvean brings all these concepts together in one native Mac app — local models, cloud APIs, agentic tools, and more.

Learn more about Elvean