Google's Veo 2 becomes widely available as it teases Gemini 2.5 Flash
What you need to know
Google teased Gemini 2.5 Flash at its Cloud Next 2025 conference, saying the model is "coming soon."
The company's Veo 2 video-generation model is becoming widely available as part of Google AI Studio.
The Live API for Gemini is also becoming available in preview for real-time interactions.
Google Cloud Next 2025 is officially here, and the first set of developer-focused announcements hasn't disappointed. This week, Google is announcing that its Veo 2 video-generation model is production-ready and widely available for use via the Gemini API in Google AI Studio. Additionally, the company confirms that Gemini 2.5 Flash, a simplified thinking model for basic tasks, is "coming soon."
As a video-generation model, Google says that Veo 2 is "able to follow both simple and complex instructions, as well as simulate real-world physics in a wide range of visual styles." Veo 2 works with text prompts, image prompts, or both. This means you can describe to Veo 2 what you want your AI-generated video to look like, provide the model with an image to work from, or use a blend of the two options.
Users can try out Veo 2 in Google AI Studio, and it can produce videos at 720p resolution and 24 frames per second. Currently, the big limitation is length: videos created with Veo 2 can only be a maximum of eight seconds long. As an enterprise-grade tool, Veo 2 costs $0.35 per second of video generated.
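For developers wondering what that looks like in practice, here is a minimal sketch of requesting a clip from Veo 2 through the Gemini API, assuming the google-genai Python SDK, an API key from Google AI Studio, and a model ID along the lines of veo-2.0-generate-001. The parameter names and polling pattern follow Google's published examples but may change as the API evolves, so treat it as illustrative rather than definitive.

    import time

    from google import genai
    from google.genai import types

    # Assumes an API key from Google AI Studio; model ID and config fields
    # mirror Google's published Veo 2 examples and may change over time.
    client = genai.Client(api_key="YOUR_API_KEY")

    operation = client.models.generate_videos(
        model="veo-2.0-generate-001",
        prompt="A slow pan across a rain-soaked neon street at night",
        config=types.GenerateVideosConfig(
            aspect_ratio="16:9",
            number_of_videos=1,
            duration_seconds=8,  # Veo 2 clips currently top out at eight seconds
        ),
    )

    # Video generation runs as a long-running operation, so poll until it finishes.
    while not operation.done:
        time.sleep(20)
        operation = client.operations.get(operation)

    for i, generated in enumerate(operation.response.generated_videos):
        client.files.download(file=generated.video)
        generated.video.save(f"veo2_clip_{i}.mp4")

At $0.35 per second of output, an eight-second clip like the one above would run about $2.80, which is why the per-second pricing matters for anyone planning to generate video at scale.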
As an example of what the model can do, Google shows off what Wolf Games is doing with Veo 2 in the video below.
The blog post also included a quick teaser that Gemini 2.5 Flash is on the way. The Gemini 2.5 family of AI models debuted last month and has been progressing at a rapid pace. It first became available as a Gemini 2.5 Pro experimental model exclusive to subscribers, only to become available to free users days later. Now, we have an idea of what Google is planning for the Flash variant.
"This 進化 of our popular workhorse model will 持続する low latency and cost-efficiency while 会社にする/組み込むing thinking 能力s," the company said.
Finally, Google announced that the Live API for Gemini is now available in preview. This tool is designed to help with real-time interactions and "enables developers to build applications and agents that process streaming audio, video and text with low latency."
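To show how that differs from a normal request-and-response call, here is a minimal sketch of a text-only Live API session, again assuming the google-genai Python SDK and a preview model ID along the lines of gemini-2.0-flash-live-001. The connection and send/receive method names have shifted between SDK versions, so this is a rough illustration of the streaming pattern, not a definitive implementation.

    import asyncio

    from google import genai

    # Assumes the google-genai Python SDK; model ID and method names follow
    # early Live API previews and may differ across SDK versions.
    client = genai.Client(api_key="YOUR_API_KEY")
    config = {"response_modalities": ["TEXT"]}  # the API also streams audio and video

    async def main():
        async with client.aio.live.connect(
            model="gemini-2.0-flash-live-001", config=config
        ) as session:
            await session.send(input="Say hello in one short sentence.", end_of_turn=True)
            # Responses stream back incrementally over the open session.
            async for response in session.receive():
                if response.text:
                    print(response.text, end="")

    asyncio.run(main())

The key difference from the standard Gemini API is the persistent session: instead of sending a prompt and waiting for a finished reply, the client keeps a connection open and consumes partial responses as they arrive, which is what makes low-latency voice and video agents possible.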
You can expect more announcements to come out of Google Cloud Next 2025, which is currently ongoing. Just yesterday, the company announced a new Gemini version for Android Studio.