Companies Just Learned a Brutal Lesson About Training AI to Do Human Jobs

A dismal job market has given rise to a grim new cottage industry: a buzzy San Francisco-based AI company called Mercor is hiring desperate job-seekers to train AI models to do the work they can’t get hired for anymore.

The company has been recruiting educated and underemployed experts while keeping them fully in the dark about whose AI they’re even training. As New York Magazine reported last month, shifts are also crushingly long, the vast majority of managers are young and inexperienced, and contracts often end abruptly without any prior warning.

Now, companies that hired Mercor — which include OpenAI and Anthropic, according to NYMag's reporting — have learned a rude lesson: Mercor revealed late last month that it had been hacked, again shedding light on Silicon Valley's extremely fragile and contractor-dependent AI supply chain.

The startup told TechCrunch that it was affected by an exploit linked to an open source project called LiteLLM. A sample of data allegedly stolen from Mercor, reviewed by the publication, included material referencing Slack data and videos purportedly showing conversations between Mercor's AI systems and its hired workers — meaning the theft very likely exposed sensitive information from the companies that hired Mercor to train their AI systems.

“We are conducting a thorough investigation supported by leading third-party forensics experts,” a Mercor spokesperson told TC. “We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible.”

The situation is looking bleak. Contractors have since filed five lawsuits against the startup, as Business Insider reported last week, accusing it of violating data privacy and consumer protection laws. The suits allege Mercor could’ve leaked highly sensitive data, including Social Security numbers or addresses, to bad actors.

While it’s not uncommon for companies to be sued following major data leaks, the latest development once again highlights the dangers of relying on an army of underpaid and overworked contractors to train extremely valuable AI models.

Mercor's corporate clients are clearly nervous as well. Meta has officially paused all work with Mercor while it conducts its own investigation into the security incident, as Wired reported earlier this month.

However, that pause is likely not out of concern for the wellbeing of the gig workers being exploited. The biggest worry for companies like Meta or Mercor is losing their competitive edge by exposing the ways they train their AI models to rival AI labs.

It's far from the first time Mercor has run afoul of the long line of highly educated workers it relies on. Even before the latest hack, Mercor had been hit with three class-action lawsuits over the past seven months, per NYMag, with plaintiffs accusing it of relying on independent contractors who have little to no agency at the company, let alone insight into the work they do.

In November, contractors also accused the startup of firing them, only to offer them work on a different project at a much lower hourly rate.

More on Mercor: AI Companies Are Treating Their Workers Like Human Garbage, Which May Be a Sign of Things to Come for the Rest of Us
