Meta announces the launch of Llama 2, an open-source AI model

Following the initial unveiling of its Llama artificial intelligence model earlier this year, Meta has announced the launch of Llama 2, along with partnerships with Microsoft and Qualcomm to bring the model to cloud services and mobile products.

The training corpus of Llama 2 has been expanded to two trillion tokens, roughly 40 percent more data than its predecessor, while the supported context length has been doubled (from 2,048 to 4,096 tokens).

Beyond remaining open source, Llama 2 retains its deployment flexibility: it is available in 7-billion, 13-billion, and 70-billion parameter versions. In addition to the recently announced collaboration with Microsoft to offer the model on the Azure cloud platform, it can also run on Qualcomm's processors, allowing Qualcomm-powered devices such as smartphones and PCs to harness greater artificial intelligence computing capability through Llama 2.
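Because the three model sizes are published as separate checkpoints, choosing a variant amounts to selecting the corresponding model identifier. A minimal sketch, assuming the `meta-llama/Llama-2-*-hf` repository naming Meta uses for its Hugging Face releases:

```python
# Map of Llama 2 parameter sizes to their Hugging Face repository IDs
# (naming assumed from Meta's published checkpoints).
LLAMA2_VARIANTS = {
    "7b": "meta-llama/Llama-2-7b-hf",
    "13b": "meta-llama/Llama-2-13b-hf",
    "70b": "meta-llama/Llama-2-70b-hf",
}

def pick_variant(size: str) -> str:
    """Return the repository ID for a given parameter size, e.g. '7b'."""
    key = size.lower()
    if key not in LLAMA2_VARIANTS:
        raise ValueError(f"unknown size {size!r}; choose from {sorted(LLAMA2_VARIANTS)}")
    return LLAMA2_VARIANTS[key]
```

In practice the returned ID would be passed to a loader such as `transformers.AutoModelForCausalLM.from_pretrained`, with the 7B variant being the realistic choice for on-device use and the 70B variant typically requiring server-class hardware.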

As part of the partnership with Microsoft, Llama 2 will be integrated into Azure's cloud services and optimized to run locally on the Windows operating system. Coupled with Microsoft's earlier collaborations with OpenAI on GPT-3, GPT-4, and ChatGPT, this signals the company's commitment to keeping its artificial intelligence offerings broadly flexible.

As for the collaboration with Qualcomm, it is anticipated that by 2024, smartphones and PCs equipped with Qualcomm processors will be able to run Llama 2. This suggests that Qualcomm's next-generation Snapdragon processor, expected to be unveiled this October, will support Llama 2 directly, promoting on-device AI computation.