Meta has joined the ranks of large tech companies by introducing its own generative AI model, Llama. Llama differs from models such as OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini in that it is "open," meaning its model weights are publicly available. Developers can download and use Llama with few restrictions, which gives them far more flexibility and control.
Meta has partnered with cloud service providers such as AWS, Google Cloud, and Microsoft Azure to deliver hosted versions of Llama. The company also offers several tools designed to help users fine-tune and customize the model to their own requirements.
What is Llama?
Llama is a family of AI models, each with its own capabilities and intended use cases. The most recent versions—Llama 3.1 8B, Llama 3.1 70B, and Llama 3.1 405B—were released in July 2024. These models are trained on various data sources, including multilingual web pages, publicly available code, academic papers, and even synthetic data generated by other AI models.
Llama 3.1 8B and Llama 3.1 70B are smaller, more compact variants designed to run efficiently on a range of hardware, from laptops to servers. Llama 3.1 405B, by contrast, is a large-scale model that generally requires data center-class hardware unless it is adapted for specialized applications. The 8B and 70B variants are less capable than the 405B model, but they are faster and have lower storage requirements and latency.
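The size-versus-hardware trade-off above can be sketched in a few lines. This is a hypothetical helper, not part of any Llama tooling; the memory figures assume roughly 2 bytes per parameter (16-bit weights) and ignore quantization, which can shrink them substantially.

```python
# Hypothetical sketch: picking a Llama 3.1 variant by available memory.
# Approximate 16-bit weight sizes: 8B ≈ 16 GB, 70B ≈ 140 GB, 405B ≈ 810 GB.

def pick_llama_variant(memory_gb: float) -> str:
    """Return the largest Llama 3.1 variant whose 16-bit weights fit."""
    if memory_gb >= 810:
        return "Llama 3.1 405B"   # data center-class hardware
    if memory_gb >= 140:
        return "Llama 3.1 70B"    # multi-GPU server
    if memory_gb >= 16:
        return "Llama 3.1 8B"     # single GPU or a well-equipped laptop
    raise ValueError("Not enough memory for any variant at 16-bit precision")

print(pick_llama_variant(24))   # → Llama 3.1 8B
print(pick_llama_variant(160))  # → Llama 3.1 70B
```

In practice, quantized builds of the 8B model run on consumer laptops, which is why the smaller variants are the usual choice outside data centers.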
What can Llama do?
Llama can perform a wide range of tasks common to generative AI models. It can help with coding, solving simple math problems, and summarizing text in various languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Although it does not yet support image processing or generation, this may change in future releases.
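Sending a task like summarization to a Llama 3.1 instruct model means wrapping the request in Meta's published chat template. The sketch below assembles that prompt string by hand for a single turn; in practice a library (for example, Hugging Face's `tokenizer.apply_chat_template`) builds it for you, and the exact token set here follows Meta's documented format.

```python
# Sketch of the Llama 3.1 instruct chat template, built by hand.

def format_llama_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama_prompt(
    "You summarize text in one sentence.",
    "Summarize: Llama is Meta's family of open generative AI models.",
)
print(prompt)
```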
Moreover, Llama models can be configured to use third-party apps, tools, and APIs for specialized tasks. By default, they can use Brave Search to retrieve current information, Wolfram Alpha to answer complex math and science queries, and a Python interpreter to test code. Meta also says that Llama 3.1 models can use tools for which they were not expressly trained, though their effectiveness with unfamiliar tools may vary.
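Tool use of this kind generally works by having the model emit a structured tool call that the application parses and routes to real code. The dispatch loop below is a minimal sketch of that pattern, assuming the model emits a JSON object with `name` and `parameters` fields; the `wolfram_alpha` stub is a stand-in for a real API client, not an actual integration.

```python
# Hypothetical sketch of tool dispatch for a tool-using Llama model.
import json

def wolfram_alpha(query: str) -> str:
    # Stand-in for a real Wolfram Alpha API call (assumption for illustration).
    return f"result for: {query}"

TOOLS = {"wolfram_alpha": wolfram_alpha}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching local tool."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    return tool(**call["parameters"])

# A model configured for tool use might emit something like:
model_output = '{"name": "wolfram_alpha", "parameters": {"query": "integrate x^2"}}'
print(dispatch(model_output))  # → result for: integrate x^2
```

The tool's result would then be fed back into the conversation so the model can compose its final answer.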