OpenAI releases GPT-4, a multimodal large language model

In November 2022, OpenAI released ChatGPT, an AI-powered chatbot built on natural language processing that can hold context-aware conversations. It can handle tasks such as writing emails, video scripts, and marketing copy, translating text, and generating code. Earlier this year, ChatGPT garnered immense attention and became a popular topic of discussion.

Today, OpenAI announced the latest iteration of its multimodal large language model, GPT-4. According to the official description, the model is more creative and collaborative than any previous OpenAI system, with a broader knowledge base and stronger problem-solving abilities. Its improved advanced reasoning surpasses the current public version of ChatGPT, which is based on GPT-3.5, significantly increasing answer accuracy and making the new ChatGPT noticeably more capable. In addition to text, GPT-4 accepts image input and can recognize and describe scenes. OpenAI is currently testing this capability in collaboration with Be My Eyes, a service for visually impaired users.

With a context length of approximately 25,000 words, GPT-4 is suited to generating and analyzing longer documents, and it supports multiple languages. Initially, OpenAI is offering GPT-4 to paying ChatGPT Plus subscribers at a monthly fee of $20, with global availability. As with previous models, developers will access GPT-4 through an API, and existing developers can join the GPT-4 waitlist.
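For illustration, API access to GPT-4 uses the same chat-style request format as the GPT-3.5 models. A minimal sketch in Python, assuming the third-party `openai` client library and an `OPENAI_API_KEY` environment variable (the prompt text here is invented for the example):

```python
import os

# Build a chat-style request payload: a list of role/content messages.
# The "system" message sets behavior; "user" messages carry the prompt.
messages = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarize the release notes in three bullet points."},
]

# Only attempt a network call if an API key is configured;
# otherwise just report the payload that would be sent.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # third-party client library: pip install openai

    response = openai.ChatCompletion.create(
        model="gpt-4",    # requires waitlist access at launch
        messages=messages,
        temperature=0.2,  # lower values give more deterministic output
    )
    print(response["choices"][0]["message"]["content"])
else:
    print(f"Would send {len(messages)} messages to the gpt-4 model.")
```

Switching an existing GPT-3.5 integration to GPT-4 is, per OpenAI, largely a matter of changing the `model` parameter once waitlist access is granted.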

ChatGPT’s worldwide success owes much to the computational power of Microsoft’s Azure cloud platform. Microsoft has invested billions of dollars in OpenAI and recently announced new massively scalable virtual machines that combine NVIDIA’s latest H100 GPUs, the Quantum-2 InfiniBand networking platform, and 4th Gen Intel Xeon Scalable processors to handle the immense computational workloads of training and scaling AI.