
Forget LLMs, It’s Time For Large Concept Models (LCMs)


LLMs have revolutionized the field of artificial intelligence and have emerged as the de facto tool for many tasks. Current LLMs process input and generate output at the token level. This contrasts sharply with humans, who operate at multiple levels of abstraction, well beyond single words, to analyze information and generate creative content.

The Large Concept Model (LCM) differs substantially from current LLMs in two respects: 1) all modeling is performed in a high-dimensional embedding space instead of on a discrete token representation, and 2) modeling is not instantiated in a particular language or modality, but at a higher semantic and abstract level.
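To make that contrast concrete, here is a toy sketch of what "modeling in embedding space" means. This is not Meta's actual architecture: it assumes an off-the-shelf sentence encoder (sentence-transformers' all-MiniLM-L6-v2) as a stand-in for the sentence-level embedding space the paper builds on, and a tiny regression head as a stand-in for the real concept-prediction model.

```python
# Toy sketch only, NOT Meta's LCM: a sentence encoder maps each sentence
# (a "concept") to one embedding, and a small network predicts the next
# concept directly in that embedding space, never touching tokens.
import torch
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in sentence encoder

sentences = [
    "Tim wasn't very athletic.",
    "He tried out for several teams.",
    "He didn't make the cut for any of them.",
]
concepts = torch.tensor(encoder.encode(sentences))  # shape: (3, embedding_dim)

dim = concepts.shape[1]
predictor = torch.nn.Sequential(  # hypothetical concept-prediction head
    torch.nn.Linear(dim, dim),
    torch.nn.ReLU(),
    torch.nn.Linear(dim, dim),
)

# Predict the third concept's embedding from the second and compare the two
# directly in embedding space (no tokens, no vocabulary involved).
predicted = predictor(concepts[1])
loss = torch.nn.functional.mse_loss(predicted, concepts[2])
print(loss.item())
```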

So, are you excited to dive deeper into this brand-new paper from Meta?

Photo by Andrey Metelev on Unsplash

Understanding The Limitations of LLMs

LLMs are trained on what we call tokens. Tokens are simply small chunks of text: sometimes whole words, but often smaller subword pieces.

A token is a segment of text that the model processes as a single unit.

For example:

  • Word-based tokenization: “Artificial Intelligence” → ["Artificial", "Intelligence"]
  • Subword tokenization: “Artificial Intelligence” → ["Art", "ificial", "Int", "elligence"]
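In practice, you can see this with any modern tokenizer. Below is a minimal sketch using Hugging Face's transformers library and GPT-2's tokenizer; the exact subword splits depend on the tokenizer's learned vocabulary, so they will not necessarily match the illustrative splits above.

```python
# Minimal tokenization sketch using the GPT-2 tokenizer from Hugging Face.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Artificial Intelligence"
tokens = tokenizer.tokenize(text)  # subword pieces, e.g. ['Art', 'ificial', 'ĠIntelligence']
ids = tokenizer.encode(text)       # the integer token IDs the model actually consumes

print(tokens)
print(ids)
```

The model itself never sees words or sentences as units, only these integer IDs, which is exactly the limitation the LCM paper sets out to address.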
