At MWC 2024, 5G and AI remain the topics drawing the most attention. Qualcomm, for example, officially launched the new Qualcomm AI Hub at today's MWC launch event, a central place for developers to obtain development resources and build AI applications on Snapdragon and other Qualcomm platforms.
Qualcomm launches new AI Hub
Specifically, Qualcomm AI Hub offers developers a library of fully optimized AI models, covering both traditional and generative AI, with deployment support across Snapdragon and other Qualcomm platforms. A developer simply selects the model the application needs and the framework used to build it, then specifies the target platform, such as a particular phone model or a particular Qualcomm chipset.
Qualcomm AI Hub then provides a version of the model optimized for that application and platform. A few lines of code are enough to fetch the model and integrate it into the application, along the lines of the sketch below.
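As a rough illustration of that flow, the snippet below traces an ordinary PyTorch model and submits it for device-specific optimization. The `qai_hub` client, the `submit_compile_job` signature, and the device name are assumptions reconstructed from the description above, not verified API, so treat them as placeholders.

```python
import torch
import torchvision
import qai_hub as hub  # assumption: Qualcomm's AI Hub Python client

# Start from an ordinary PyTorch model and trace it (standard torch API).
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Ask the hub to optimize the model for one specific target device.
# submit_compile_job and Device(...) are modeled on the description
# above and may not match the real client exactly.
job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S24"),  # hypothetical device name
    input_specs={"image": (1, 3, 224, 224)},
)
optimized_model = job.get_target_model()  # download the optimized artifact
```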
Qualcomm AI Hub will support more than 75 AI models, spanning traditional and generative AI. Thanks to these optimizations, developers can run AI inference up to four times faster.
The gains are not only in speed: the optimized models also consume less memory bandwidth and storage, which translates into higher energy efficiency and longer battery life.
These optimized models will be available on Qualcomm AI Hub, Hugging Face and GitHub, making it easy for developers to integrate them into their workflows.
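Given that the models are published on Hugging Face, fetching one could be as simple as the following. `snapshot_download` is the standard Hugging Face Hub call, but the repo id shown is a made-up placeholder, not an actual Qualcomm repository.

```python
from huggingface_hub import snapshot_download

# Download an optimized model's files from Hugging Face.
# The repo id below is a placeholder, not a real Qualcomm repo name.
local_dir = snapshot_download(repo_id="qualcomm/some-optimized-model")
print(f"Model files downloaded to: {local_dir}")
```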
In addition to the new AI Hub, Qualcomm demonstrated the world's first large multimodal model (LMM) running on an Android phone powered by the Snapdragon 8 Gen 3. The demo featured an LMM with more than 7 billion parameters that accepts text, voice and image input and can hold multi-round dialogues based on that input.
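To make "text, voice and image input with multi-round dialogue" concrete, here is a minimal sketch of the interaction pattern on the application side. Every name in it (`Turn`, `lmm_generate`) is hypothetical and stands in for whatever on-device runtime actually drives the model.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str                      # "user" or "assistant"
    text: str | None = None        # typed or transcribed text
    image_path: str | None = None  # optional image attached to this turn
    audio_path: str | None = None  # optional voice clip for this turn

# Multi-round history: the model conditions each reply on all earlier turns.
history = [
    Turn(role="user", text="What breed is this dog?", image_path="dog.jpg"),
    Turn(role="assistant", text="It looks like a border collie."),
    Turn(role="user", text="Suggest a name for it."),  # follow-up, no image
]
# reply = lmm_generate(history)  # hypothetical call into the on-device LMM
```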
Qualcomm also brought a second multimodal AI demonstration, this time on a Windows PC built on the new Snapdragon X Elite platform: the world's first large multimodal model performing audio reasoning on a Windows PC. It can recognize birdsong, music or various household sounds, and hold a conversation grounded in what it hears to help the user.
For example, the model can identify the genre and style of music a user plays, offer background on the music's history, recommend similar pieces, or adjust the music playing around the user through dialogue.
These models are optimized for strong performance and energy efficiency and run entirely on device, which improves privacy, reliability, personalization and cost.
Qualcomm also demonstrated its first LoRA model running on an Android phone. LoRA (low-rank adaptation) adjusts or customizes a model's generated content without changing the underlying model: a very small adapter, only about 2% of the model's size and therefore easy to download, is enough to customize the behavior of the entire generative AI model, as the sketch below illustrates.
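The "adapter at roughly 2% of model size" idea maps directly onto how LoRA works in open-source tooling: small low-rank matrices are trained and shipped while the base weights stay frozen. The sketch below uses Hugging Face's peft library to attach a LoRA adapter and report how small the trainable portion is; the base model and hyperparameters are illustrative choices, not Qualcomm's.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model (placeholder; any causal LM works the same way).
base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA injects small low-rank matrices into the chosen layers; the base
# weights stay frozen, so only the tiny adapter needs to be shipped.
config = LoraConfig(
    r=16,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor for the update
    target_modules=["c_attn"],  # attention projection in GPT-2
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

# Prints trainable vs. total parameters, typically a low single-digit
# percentage, matching the "roughly 2% of model size" figure above.
model.print_trainable_parameters()
```

As a point of reference, 2% of a 7-billion-parameter model would be roughly 140 million parameters, small enough to download as a standalone file and swap in at runtime without touching the base model.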