Last week, we wrapped up Google Cloud Next ‘24. Tens of thousands of attendees joined us in Las Vegas to experience the latest technologies and features that are helping organizations around the world accelerate their business transformation — and more than 300 customers and partners shared their gen AI successes.
At the event, our CEO Sundar Pichai described how Google has been building specialized AI infrastructure for more than a decade, positioning us at the forefront of the AI platform shift. Through Google Cloud, companies across all industries have access to decades of our research and innovation — along with our most capable AI technologies.
At the annual developer keynote, Google Cloud’s Chief Evangelist, Richard Seroter, and Sr. Developer Relations Engineer, Chloe Condon, explained how Google Cloud meets organizations where they are and is committed to helping them shape their future as they grow.
If you missed the big event, you can find all 218 things we announced over on the Google Cloud blog. Here’s a recap of five of the biggest moments.
1. Expanded access to the best models
Gemini 1.5 Pro, Google’s most powerful foundation model, is now available in public preview in Vertex AI to Google Cloud customers. Thanks to Gemini’s multimodal capabilities, users can now process text, audio, video, code, and other content formats all within a single prompt.
We also added CodeGemma, a new model from our Gemma family of lightweight open models, to Vertex AI.
2. Enhancements to our AI Hypercomputer architecture and more custom chips
We’ve continued to broaden our AI infrastructure to make developing with AI easy and efficient. Our AI Hypercomputer architecture enhancements, which combine TPUs, GPUs, AI software and more, will help boost efficiency and productivity across AI training, tuning and serving for developers and businesses.
We also announced that TPU v5p is now generally available and that our new Arm-based Axion CPU is the latest custom Google-made silicon, helping to maximize performance and energy efficiency.
3. Better data and better coding for better AI
To build AI, your data should be accessible, organized and, above all, reliable. We’ve now made it possible to ground AI workloads, in essence connecting them to Google Search, leading to better, more accurate outputs.
Gemini for Google Cloud has a host of AI-powered features that make it easier to build and develop on the cloud — including Code Assist to help with writing and analysis of code; Cloud Assist for automating more of the operations and maintenance process; and AI enhancements to our security, databases and analytics offerings.
4. More creativity and collaboration
At Next, we shared a number of updates to help teams get more out of the tools they rely on every day. A few highlights include the announcement of Vids, an all new AI-powered video creation app for work that will sit alongside tools like Google Docs, Sheets and Slides.
We also shared that we’re adding translated caption support for 52 new languages in Translate for me in Meet (coming in June), which brings the total number of supported languages to 69. And we rolled out two new Workspace subscription add-ons for $10 per user, per month, which give customers on select Workspace plans access to AI features in Meet and Chat, as well as added security features.
5. A look at not-so-secret agents
Over the past year-plus, thousands of Google Cloud customers have increasingly been developing what’s best thought of as AI agents. More than just answering queries, these agents are unique in that they can work semi-autonomously to take actions and achieve specific enterprise goals, like helping customers with purchases or programmers with writing code. So far, we’ve seen six distinct kinds of AI agents emerge: customer service, employee empowerment, creative ideation and production, data analysis, code creation and cybersecurity.
For more on Cloud Next ’24, check out our Next ‘24 Collection.