Google Gemini Release Review

After a few delays and a lot of hype, Google just released Gemini, its new GenAI model. In this video I break down the 60-page paper Google released in support of the launch, with highlights below.

Here are the TL;DW highlights:

  • Gemini outperformed GPT-4 on 30 out of 32 benchmarks, though take this with a grain of salt, as the source is Google itself.

  • Gemini has three versions: Nano (for phones), Pro (comparable to GPT-3.5), and Ultra (comparable to GPT-4, but not yet released).

  • Gemini Pro has replaced PaLM 2, which powered Bard and was known to have some major issues.

  • Gemini was trained on multi-modal data and can handle diverse inputs (text, video, audio, images) and output both text and images.

  • Education use case: Gemini can analyze complex student submissions that combine text and visuals, and it does well on academic benchmarks.

  • The model has a 32K context window, significantly smaller than that of both GPT-4 Turbo and Claude.

  • Unfortunately, the GenAI trend of limited transparency continues: the paper says little about training data, bias mitigation, red-teaming, or harm mitigation.

  • Gemini hallucinates (makes things up that sound true), just like all other GenAI models.

With no model cards yet and no access to Gemini Ultra, there is still a lot we don't know. But I'm looking forward to seeing whether there is finally a contender in the best-LLM race that OpenAI keeps winning.

Have you had a chance to try Gemini out yet? What do you think?
