The most recent failure was when everyone spent a month making fun of Google's search AI because it was quoting blatantly wrong Reddit users (who were often joking about something, and it took them seriously)
Generally I think what companies do nowadays for normal models (not the knowledge-base search thing Google was doing) is train a model on basically everything, then put it through a tuning stage on just approved texts, and then a human feedback tuning stage to improve it further
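Roughly, the three stages described above could be sketched like this. This is a toy stub, not a real trainer, and all the function and field names here are hypothetical, just to show the order of the stages:

```python
# Toy sketch of the common three-stage pipeline:
# 1) pretrain on a huge raw corpus, 2) fine-tune on curated/approved
# text, 3) tune on human feedback (RLHF-style in real systems).
# The "model" is just a dict so the flow is visible; nothing here
# actually learns.

def pretrain(model, web_scale_corpus):
    # Stage 1: absorb general language patterns from raw, unfiltered text.
    model["seen_tokens"] += sum(len(doc.split()) for doc in web_scale_corpus)
    model["stage"] = "pretrained"
    return model

def supervised_finetune(model, approved_corpus):
    # Stage 2: continue training only on vetted, higher-quality examples.
    model["seen_tokens"] += sum(len(doc.split()) for doc in approved_corpus)
    model["stage"] = "finetuned"
    return model

def human_feedback_tune(model, preference_pairs):
    # Stage 3: nudge outputs toward responses humans rated higher
    # (real systems use methods like RLHF or DPO here).
    model["preferences_used"] = len(preference_pairs)
    model["stage"] = "aligned"
    return model

model = {"seen_tokens": 0, "preferences_used": 0, "stage": "init"}
model = pretrain(model, ["the cat sat", "some joke scraped off a forum"])
model = supervised_finetune(model, ["a vetted, factual answer"])
model = human_feedback_tune(model, [("good answer", "bad answer")])
print(model["stage"])  # aligned
```

The point of the ordering is that the messy web data only shapes the base model; the later stages are where the curation and human judgment come in.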
Didn’t the last AI that was trained on social media turn into a raging racist and misogynist?
And a nazi. A perfect fit for Reddit