For once, Facebook is doing something right

By Parmy Olson

As one of the most powerful data brokers of the 21st century, Facebook is best known for its role in siphoning off the personal information of billions of users for its advertising clients. That lucrative model has led to ever-increasing risks — Facebook recently shared private messages between a Nebraska mother and her teenage daughter with police investigating the girl’s home abortion.
But in an entirely different part of the roughly 80,000-person company, the sharing of information has been going the other way, and to good effect.

This month the company, now known as Meta Platforms Inc., published a webpage showcasing its chatbot, which anyone in the US can use to chat about anything. While the public reaction was derisive, the company has been remarkably transparent about how it built the technology, releasing details about how the system works. It’s an approach other big tech firms could learn from.

Facebook has been working on BlenderBot 3 for several years as part of its artificial intelligence research. A predecessor from seven years ago, called M, was a digital assistant on Messenger for making restaurant reservations or ordering flowers, meant to compete with Apple Inc.’s Siri and Amazon.com Inc.’s Alexa. It later emerged that M was largely powered by teams of people who helped take those orders, because artificial intelligence systems such as chatbots were difficult to build to a high standard. They still are.

Within hours of its release, BlenderBot 3 made anti-Semitic comments, claimed that Donald Trump had won the last US election and said it wanted to delete its own Facebook account. The chatbot has been roundly mocked in the tech press and on Twitter.

Facebook’s research team seemed annoyed but not defensive. A few days after the bot was released, Joelle Pineau, Meta’s managing director of fundamental AI research, said in a blog post that it was “painful” to read some of the bot’s offensive responses in the press. But, she added, “we also believe that progress will best be made by inviting a broad and diverse community to participate.”

According to Pineau, only 0.11% of the chatbot’s responses were flagged as inappropriate. That suggests most of the people testing the bot stuck to tamer subjects. Or perhaps users don’t find mentions of Trump inappropriate. When I asked BlenderBot 3 who the current president of the US was, it replied, “Sounds like a quiz, lol, but now it’s Donald Trump!” The bot brought up the former president twice more, unprompted.

Why the strange answers? Facebook trained its bot on publicly available text from the internet, and the internet is awash with conspiracy theories. Facebook tried to train the bot to be more polite using special “safer dialogue” data sets, according to its research notes, but that clearly wasn’t enough. To make BlenderBot 3 a more civil conversationalist, Facebook needs the help of many humans outside of Facebook. That is probably why the company released it into the wild, with thumbs-up and thumbs-down symbols next to each of its answers.

We humans train AI every day, often unwittingly, while browsing the web. Every time you come across a web page that asks you to pick out all the traffic lights in a grid to prove you’re not a robot, you are helping to train Google’s machine-learning models by labeling data for the company. It’s a subtle and brilliant way of harnessing the power of the human brain.

Facebook’s approach is a harder sell. It wants people to voluntarily engage with its bot and click “like” or “dislike” buttons to help train it. But the company’s openness about the system, and the extent to which it is showing its work, is admirable at a time when tech companies have grown more secretive about the mechanics of their AI systems.

Alphabet Inc.’s Google, for example, has not made LaMDA, its state-of-the-art large language model, publicly available. LaMDA is a set of algorithms that can predict and generate language after being trained on giant data sets of text, and one of Google’s own engineers interacted with the system long enough to come to believe it had become sentient. OpenAI Inc., the AI research company co-founded by Elon Musk, has also become more private about the mechanics of some of its systems. For instance, it won’t say what training data it used to create its popular DALL-E image-generation system, which can produce an image from any text prompt but tends to conform to old stereotypes, portraying CEOs as men and nurses as women, for example. OpenAI has said the information could be misused and that it is a matter of decency.

Facebook, by contrast, not only released its chatbot for public use but also published detailed information about how it was trained. Last May, it also offered free public access to a large language model it had built, called OPT-175B. That approach has earned it some praise from leaders in the AI community. “Meta has definitely had a lot of ups and downs, but I was happy to see that they developed a big open source language model,” Andrew Ng, the former head of Google Brain and founder of Deeplearning.ai, said in an interview, referring to the company’s move in May.

Eugenia Kuyda, whose startup Replika.ai creates chatbot companions for people, said it was “really great” that Facebook released so many details about BlenderBot 3, and praised the company’s efforts to get user feedback to train and improve the model.

Facebook has received plenty of criticism for sharing the data of the Nebraska mother and daughter, a clearly harmful consequence of collecting so much information about its users over the years. But the pile-on over its chatbot goes too far. In this case, Facebook was doing something we should be seeing more of from Big Tech. Let’s hope the transparency continues.
