How a $100-billion question went wrong for Alphabet
Google launched a competitor to ChatGPT with a lot of fanfare but ended up with egg on its face. After OpenAI released its revolutionary, Microsoft-backed AI chatbot, ChatGPT, Google feared that users would gravitate away from its traditional Search engine. Its response came in the form of Bard, but the start was far from satisfactory. Bard was built on Google's own large language model, LaMDA (Language Model for Dialogue Applications).
The chatbot was unveiled in Paris amid fanfare, with curiosity stoked through extensive advertising and the exciting prospects on offer. However, in one of the promotional videos, Bard was shown responding incorrectly to a question. Ironically, the prompt asked it to describe the discoveries of the James Webb Space Telescope (JWST) to a 9-year-old child. Critics felt that traditional Google Search could have done a better job.
Incidentally, the JWST was launched as a successor to the ageing Hubble Space Telescope, to capture the clearest images of the most distant reaches of the observable universe: light from the universe's early stages, around 100-250 million years after the Big Bang. Those observations should help us better understand the universe's cosmological parameters, including its size, geometry and the mysterious dark matter and dark energy.
Google's Bard, ironically, claimed that the JWST took the very first pictures of a planet outside our solar system, a feat generally credited to an earlier ground-based telescope. This faux pas saw Alphabet's market capitalization plunge by about $100 billion. Google possibly realizes that, in its haste to field an alternative, Bard was not tested enough; such a model requires extensive training and evaluation on vast amounts of data before being released for public use. For now, Google has surely ended up cutting a sorry figure.