Hugging Face Megalodon AI

This page summarizes the architecture of Megalodon as presented in the Hugging Face Megalodon paper page. Megalodon is a neural architecture for efficient sequence modeling with unlimited context length. It inherits the architecture of Mega and introduces several technical components to improve its capability and stability, including the complex exponential moving average (CEMA), a timestep normalization layer, a normalized attention mechanism, and pre-norm with a two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than the Transformer.

The full paper is available at https://huggingface.co/papers/2404.08801 and https://arxiv.org/abs/2404.08801. The abstract is reproduced below for reference.


Authors: Xuezhe Ma, Beidi Chen, Omer Levy, Chunting Zhou

Abstract

The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy. We introduce Megalodon, a neural architecture for efficient sequence modeling with unlimited context length. Megalodon inherits the architecture of Mega (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including the complex exponential moving average (CEMA), timestep normalization layer, normalized attention mechanism and pre-norm with two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than the Transformer at the scale of 7 billion parameters and 2 trillion training tokens. Megalodon reaches a training loss of 1.70, landing mid-way between Llama2-7B (1.75) and 13B (1.67). Code: https://github.com/XuezheMax/megalodon
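For intuition, here is a minimal NumPy sketch of two of the components named in the abstract: a complex damped EMA recurrence in the spirit of CEMA, and a cumulative normalization along the time axis in the spirit of the timestep normalization layer. This is not the paper's implementation (see the linked repository for that); the function names, shapes, and parameterization below are illustrative assumptions rather than the exact formulation used by Megalodon.

```python
import numpy as np

def complex_damped_ema(x, alpha, delta, theta):
    """Per-dimension complex damped EMA, in the spirit of Megalodon's CEMA.

    Illustrative sketch, not the paper's exact parameterization: each input
    dimension keeps one complex state that decays by (1 - alpha * delta) and
    rotates by angle theta at every timestep; the output is its real part.

    x:     (T, d) real input sequence
    alpha: (d,)   update gates in (0, 1)
    delta: (d,)   damping factors in (0, 1)
    theta: (d,)   per-dimension rotation angles
    """
    T, d = x.shape
    rot = np.cos(theta) + 1j * np.sin(theta)      # complex rotation e^{i*theta}
    decay = (1.0 - alpha * delta) * rot           # complex decay factor
    update = alpha * rot                          # complex input scaling
    h = np.zeros(d, dtype=np.complex128)          # hidden state
    y = np.empty((T, d))
    for t in range(T):
        h = update * x[t] + decay * h             # linear recurrence over time
        y[t] = h.real                             # project back to the reals
    return y

def timestep_norm(x, eps=1e-5):
    """Cumulative normalization along the time axis.

    A simplified sketch of the idea behind the timestep normalization layer:
    each position is normalized with the mean/variance of all positions up to
    and including it, so the statistics stay causal. Learned gains/biases and
    any grouping of features are omitted for brevity.

    x: (T, d) input sequence
    """
    T, d = x.shape
    counts = np.arange(1, T + 1)[:, None]         # 1, 2, ..., T
    cum_mean = np.cumsum(x, axis=0) / counts      # running mean per dimension
    cum_var = np.cumsum(x ** 2, axis=0) / counts - cum_mean ** 2
    return (x - cum_mean) / np.sqrt(cum_var + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, d = 16, 4
    x = rng.standard_normal((T, d))
    alpha = rng.uniform(0.1, 0.9, d)
    delta = rng.uniform(0.1, 0.9, d)
    theta = rng.uniform(0.0, np.pi, d)
    print(complex_damped_ema(x, alpha, delta, theta).shape)  # (16, 4)
    print(timestep_norm(x).shape)                            # (16, 4)
```

Because both operations are causal (the recurrence and the cumulative statistics only look backward in time), they can be applied to arbitrarily long sequences in a streaming fashion, which is the property that lets Megalodon target unlimited context length.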
