AFM vector

Based on the available documentation about Apple Intelligence, the claim that "AFM cannot vectorize data" is accurate in one specific sense: the framework exposes no public embedding API, even though the models rely on embeddings internally.[1]

Apple Foundation Models (AFM) are primarily designed for text generation, summarization, extraction, and other language-understanding tasks rather than for generating embedding vectors for use in vector databases. The Foundation Models framework provides text generation, guided generation of Swift data structures, and tool calling, but it does not expose functionality for generating text embeddings or vectorizing data for retrieval-augmented generation (RAG) applications.[1]
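To make the distinction concrete, here is a minimal sketch of the kind of call the framework does support: prompt-in, text-out generation via `LanguageModelSession` (available from iOS 26 / macOS 26). The function name and prompt are illustrative; the point is that nothing in this API surface returns a vector.

```swift
import FoundationModels

// Sketch of what the Foundation Models framework offers: text generation
// through LanguageModelSession. There is no embed()/vectorize() call here.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize the following in one sentence:\n\(text)"
    )
    return response.content
}
```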

Alternative Vectorization Options

For developers needing to vectorize data on Apple devices, separate solutions are available:

  • NLEmbedding: Apple provides the NLEmbedding API in the NaturalLanguage framework, which can generate word and sentence embeddings entirely offline on-device, though the quality may not match cloud-based embedding models (see the sketch after this list)[2]
  • Third-party solutions: Developers can use OpenAI or other services to create embedding vectors and store them locally in databases like GRDB, VecturaKit, or ObjectBox[2]
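
Here is a hedged sketch of the NLEmbedding route mentioned above. `sentenceEmbedding(for:)` returns nil when no model is available for the OS version and language, so availability must be checked at runtime; the cosine-similarity helper is an illustrative addition, not part of the API.

```swift
import NaturalLanguage

// Generate a sentence embedding fully on-device with NLEmbedding.
// Returns nil if no embedding model is available for this OS/language.
func embed(_ sentence: String) -> [Double]? {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        return nil
    }
    return embedding.vector(for: sentence)
}

// Illustrative helper: cosine similarity between two embedding vectors,
// the typical comparison used when storing vectors in a local database.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (magA * magB)
}
```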

What AFM Does Include

While AFM doesn't expose embeddings for external use, the models internally use embedding layers that map input tokens into vectors for processing. The on-device model uses a vocabulary of 49K tokens, while the server model uses 100K tokens. These embedding layers are quantized to 4 bits per weight to reduce memory requirements.[3][1]
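A back-of-the-envelope calculation shows why the 4-bit quantization matters. Only the 49K vocabulary size and the 4-bit weight width come from the figures above; the hidden dimension of 3,072 is a hypothetical value chosen for illustration, since Apple has not published it in the cited material.

```swift
// Embedding-table size estimate. Vocabulary (49K) and 4-bit width come
// from the text above; hiddenDim is a HYPOTHETICAL illustrative value.
let hiddenDim = 3_072

func tableMB(vocab: Int, bitsPerWeight: Double) -> Double {
    Double(vocab) * Double(hiddenDim) * bitsPerWeight / 8 / 1_048_576
}

print(tableMB(vocab: 49_000, bitsPerWeight: 4))   // ≈ 71.8 MB at 4-bit
print(tableMB(vocab: 49_000, bitsPerWeight: 16))  // ≈ 287.1 MB at 16-bit
```

Under these assumptions, 4-bit quantization cuts the on-device embedding table to roughly a quarter of its 16-bit size.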
