Microsoft is gearing up to roll out major upgrades to its Copilot feature, with the introduction of OpenAI's latest GPT-4 Turbo model. The model supports a 128K-token context window, enhancing Copilot's ability to understand longer queries and deliver more detailed responses. Currently in testing with select users, it is expected to be widely integrated into Copilot in the coming weeks.
In the meantime, users can enjoy improved capabilities with an upgraded DALL-E 3 model in Bing Image Creator and Copilot. This enhancement allows Copilot to generate higher-quality and more accurate images based on user prompts, offering a notable improvement in image creation.
For coders and developers, a new code interpreter feature is on the horizon. It is designed to give Copilot more accurate calculations, data analysis, and code generation: the chatbot will not only write code in response to natural-language requests but also execute it in a sandboxed environment and use the output to inform its answers.

“Copilot will write the code to answer your complex, natural-language requests, run that code in a sandboxed environment and use the results to give you higher quality responses,” explains Microsoft. “You can also upload and download files to and from Copilot, so you can work with your own data and code as well as Bing search results.”
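Microsoft has not published implementation details, but the general "code interpreter" pattern is straightforward to sketch: generate code for a request, run it in an isolated process with a timeout, and feed the captured output back into the response. The example below is a simplified, hypothetical illustration of that pattern (the function name and design are ours, not Copilot's); a real sandbox would also restrict filesystem, network, and memory access.

```python
import os
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 5.0) -> str:
    """Run generated code in a separate process and capture its output.

    Illustrative only: a production sandbox would add far stronger
    isolation (resource limits, no network, restricted filesystem).
    """
    # Write the generated code to a temporary script file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # Execute in a child process so a crash or hang in the generated
        # code cannot take down the host; enforce a wall-clock timeout.
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout if result.returncode == 0 else result.stderr
    finally:
        os.remove(path)

# Example: code a chatbot might generate for "sum the squares of 1..10".
generated = "print(sum(n * n for n in range(1, 11)))"
print(run_in_sandbox(generated))  # 385
```

The key idea is the separation of concerns: the model only produces text, while a hardened executor decides whether and how that text ever runs.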
On the Bing front, Microsoft is introducing “Deep Search,” leveraging the power of GPT-4 to deliver optimized search results for complex topics. Activating Deep Search expands search queries, offering more comprehensive descriptions and delivering highly relevant results. These advancements collectively mark a significant leap in Microsoft’s AI capabilities, promising a more efficient and sophisticated user experience across various applications.