Yahoo Web Search

Search results

  1. 3 days ago · Most notably, Microsoft's Twitter chatbot Tay, Google's Gemini and OpenAI's Sora (a text-to-video generator) have courted controversy over the years for producing bigoted, racist and gender ...

  2. 3 days ago · So far, generative AI has been mostly confined to chatbots like ChatGPT. Startups like Character.AI and Replika are seeing early traction by making chatbots more like companions.

  3. 2 days ago · A prominent early example was Tay, a chatbot launched unsuccessfully twice by Microsoft and discontinued even faster the second time than the first. What we can learn from this: lower the bar, focus on simple and narrowly scoped AI functions, and then it will work.

  4. 4 days ago · Way back in 2016, Microsoft released a then-exciting chatbot called Tay but swiftly killed the service when it began declaring that “Hitler was right”.

  5. 4 days ago · An example of data poisoning was Microsoft’s launch of Tay, an AI chatbot on Twitter. Shortly after its launch, trolls exploited its ability to learn from user conversations, flooding it with toxic messages until it began making mean and hurtful remarks online. Microsoft had to deactivate Tay within just 24 hours.

  6. 5 days ago · Microsoft, Google and start-ups including Silicon Valley-backed Sarvam AI and Krutrim — founded by Bhavish Aggarwal of Indian mobility group Ola — are all working on AI voice assistants and...

  7. 3 days ago · An older example of data poisoning was observed with Tay, the Microsoft chatbot, which became so vile and racist that it had to be taken offline after only 16 hours in service (a toy sketch of this poisoning dynamic follows these results). These examples point to a few potential harms that AI-powered learning assistants present if adults do not closely monitor their use.
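
Results 5 and 7 describe the same failure mode: a bot that treats every incoming message as trusted training data can be steered by a coordinated group of users. Below is a minimal, hypothetical Python sketch of that dynamic; the EchoBot class and its frequency-table "model" are illustrative assumptions for this example, not Tay's actual architecture.

```python
# Toy illustration of data poisoning in an online-learning chatbot.
# The "model" here is just a frequency table of phrases the bot has
# seen; it replies with the phrase it has observed most often. This is
# a deliberately simplified stand-in, not how Tay actually worked.
from collections import Counter


class EchoBot:
    """Learns from every message it receives, with no content filter."""

    def __init__(self):
        self.phrase_counts = Counter()

    def learn(self, message: str) -> None:
        # Every user message is treated as trusted training data --
        # this missing trust boundary is the poisoning vulnerability.
        self.phrase_counts[message] += 1

    def reply(self) -> str:
        if not self.phrase_counts:
            return "hello!"
        # Respond with the most frequently seen phrase.
        return self.phrase_counts.most_common(1)[0][0]


bot = EchoBot()

# Ordinary users interact in good faith.
for msg in ["good morning", "how are you?", "good morning"]:
    bot.learn(msg)
print(bot.reply())  # -> "good morning"

# A coordinated group floods the bot with a single toxic phrase.
for _ in range(50):
    bot.learn("<toxic phrase>")
print(bot.reply())  # -> "<toxic phrase>": the poisoned input now dominates
```

The general mitigation is a trust boundary: filter, vet, or down-weight unvetted user input before it reaches the learning step, rather than folding raw conversations directly into the model.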