
OpenAI Claims New York Times Paid Someone to Hack Its Systems


OpenAI has petitioned a federal judge to dismiss portions of The New York Times’ copyright lawsuit against it, contending that the newspaper engaged in “prompt engineering” or “red-teaming” by allegedly paying individuals to manipulate ChatGPT and other artificial intelligence (AI) systems to fabricate evidence for the case.

In a filing submitted to a Manhattan federal court on Monday, OpenAI asserted that the Times induced its technology to replicate Times content through “deceptive prompts” that violate OpenAI’s terms of use. The firm did not identify the individual it alleges the Times hired to manipulate its systems, nor did it directly accuse the newspaper of breaching anti-hacking statutes.

OpenAI’s filing stated: “The claims made in the Times’ complaint fail to meet its renowned journalistic standards. The truth, which will emerge during this case, is that the Times compensated someone to manipulate OpenAI’s products.”

The Times Denounces OpenAI Claims

According to Ian Crosby, the attorney representing the newspaper, what OpenAI characterizes as “hacking” was simply the use of OpenAI’s own products to search for evidence that they had stolen and replicated The Times’s copyrighted content.

In December 2023, The Times initiated legal action against OpenAI and its principal financial backer, Microsoft, alleging the unauthorized utilization of millions of Times articles to train chatbots to provide information to users.

The lawsuit, which draws on both the United States Constitution and the Copyright Act, seeks to safeguard The Times’s original journalism and also implicates Microsoft’s Bing AI, accusing it of generating verbatim excerpts from Times content.

The Times is among numerous copyright holders pursuing legal action against technology firms for purportedly misappropriating their content in AI training. Authors, visual artists, and music publishers have filed similar lawsuits.

OpenAI has previously contended that training advanced AI models without incorporating copyrighted works is “impossible.” In a submission to the U.K. House of Lords, OpenAI argued that because copyright today covers virtually every form of human expression, training leading AI models would be unfeasible without using copyrighted materials.

Technology firms argue that their AI systems use copyrighted material fairly, asserting that these legal challenges jeopardize the growth of a potential multitrillion-dollar industry.

Courts have yet to determine whether AI training qualifies as fair use under copyright law. However, some infringement claims associated with outputs from generative AI systems have been dismissed due to insufficient evidence demonstrating that the AI-generated content resembled copyrighted works.