
CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections

Big surprise: CNET's writing robot doesn't know what it's talking about.

[Image: stock photo of a robot hand typing. Caption: Artificial intelligence can generate text, but the technology hasn't yet learned to be accurate. Credit: kung_tom (Shutterstock)]

Aside from stringing together human-like, fluid English sentences, one of ChatGPT's biggest skills seems to be getting things wrong. In the pursuit of generating passable paragraphs, the AI program fabricates information and bungles facts like nobody's business. Unfortunately, tech outlet CNET decided to make AI's mistakes its business.

The tech site has been forced to issue multiple major corrections to a post created via AI, as first reported by Futurism. A single AI-written explainer on compound interest contained at least five significant inaccuracies, which have now been amended. The errors were as follows, according to CNET's hefty correction:

  • The article implied that a savings account containing $10,000, with a 3% interest rate compounding annually, would accrue $10,300 in interest after a year. The real earned interest would amount to $300 (see the sketch after this list).
  • An error similar to the above showed up in a second example, based on the first.
  • The post incorrectly stated that one-year CD accounts’ interest only compounds annually. In reality: CD accounts compound at variable frequencies.
  • The article misreported how much a person would have to pay on a car loan with a 4% interest rate over five years.
  • The original post incorrectly conflated APR and APY, and offered bad advice accordingly.
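
For the record, the arithmetic at issue is not hard. Below is a minimal sketch in Python (ours, not anything CNET published) of the two concepts the bot botched: compound interest, where a balance grows as A = P(1 + r/n)^(nt), and the difference between APR (the nominal rate) and APY (the effective yield once compounding is factored in). The dollar figures come from CNET's corrected example; the function names are our own invention.

```python
# A minimal sketch, not CNET's code: the $10,000 / 3% figures come from
# CNET's corrected example; the function names are hypothetical.

def compound_balance(principal: float, annual_rate: float,
                     periods_per_year: int, years: float) -> float:
    """Balance after compounding: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

def apy(apr: float, periods_per_year: int) -> float:
    """Effective annual yield (APY) implied by a nominal rate (APR)."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

balance = compound_balance(10_000, 0.03, 1, 1)
print(f"Balance after one year: ${balance:,.2f}")    # $10,300.00
print(f"Interest earned: ${balance - 10_000:,.2f}")  # $300.00

# APR vs. APY: the two diverge whenever interest compounds more than once a year.
print(f"APY on a 3% APR, compounded monthly: {apy(0.03, 12):.4%}")  # ~3.0416%
```

Run it, and the confusion behind the first error is obvious: $10,300 is the total balance after a year; $300 is the interest earned.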

For more than two months, CNET has been pumping out posts generated by an artificial intelligence program. The site has published 78 of these articles total, and up to 12 in a single day, originally under the byline “CNET Money Staff,” and now just “CNET Money.” Initially, the outlet seemed eager to have its AI authorship fly under the radar, disclosing the lack of a human writer only in an obscure byline description on the robot’s “author” page. Then, Futurism and other media outlets caught on. Critique followed. CNET’s editor in chief, Connie Guglielmo, wrote a statement about it.

And just as the outlet's public acknowledgment of its use of AI came only after widespread criticism, CNET didn't identify or fix the inaccuracies noted on Tuesday on its own. The media outlet's correction came only after Futurism directly alerted CNET to some of the errors, Futurism reported.


CNET has claimed that all of its AI-generated articles are "reviewed, fact-checked and edited" by real, human staff, and each post has an editor's name attached to it in the byline. But clearly, that alleged oversight isn't enough to keep the AI's many mistakes from slipping through the cracks.


Usually, when an editor approaches an article (particularly an explainer as basic as "What is Compound Interest"), it's safe to assume that the writer has done their best to provide accurate information. But with AI, there is no intent, only the product. An editor evaluating an AI-generated text cannot assume anything, and instead has to take an exacting, critical eye to every phrase, word, and punctuation mark. It's a different type of task from editing a person, and one people might not be well-equipped for, considering the degree of complete, unfailing attention it must take and the high volume CNET seems to be aiming for with its AI-produced stories.

It’s easy to understand (though not excusable) that when sifting through piles of AI-generated posts, an editor could miss an error about the nature of interest rates among the authoritative-sounding string of statements. When writing gets outsourced to AI, editors end up bearing the burden, and their failure seems inevitable.


And the failures are almost certainly not limited to that one article. Nearly all of CNET's AI-written articles now come with an "Editors' note" at the top that reads, "We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections," indicating the outlet has realized the inadequacy of its initial editing process.

Gizmodo reached out to CNET for more clarification about what this secondary review process means via email. (Will each story be re-read for accuracy by the same editor? A different editor? An AI fact-checker?) However, CNET didn’t directly respond to my questions. Instead, Ivey Oneal, the outlet’s PR manager, referred Gizmodo to Guglielmo’s earlier statement and wrote, “We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process. We will continue to issue any necessary corrections according to CNET’s correction policy.”


Given the apparent high likelihood of AI-generated errors, one might ask why CNET is pivoting away from people to robots. Other journalistic outlets, like the Associated Press, also use artificial intelligence—but only in very limited contexts, like filling information into pre-set templates. And in these narrower settings, the use of AI seems intended to free up journalists to do other work, more worthy of their time. But CNET’s application of the technology is clearly different in both scope and intent.

All of the articles published under the "CNET Money" byline are very general explainers with plain-language questions as headlines. They are clearly optimized to take advantage of Google's search algorithms, and to end up at the top of people's results pages, drowning out existing content and capturing clicks. CNET, like Gizmodo and many other digital media sites, earns revenue from ads on its pages. The more clicks, the more money an advertiser pays for their miniature digital billboard(s).


From a financial perspective, you can't beat AI: there's minimal overhead cost and no human limit to how much can be produced in a day. But from a journalistic viewpoint, AI generation is a looming crisis, in which accuracy becomes entirely secondary to SEO and volume. Click-based revenue doesn't incentivize thorough reporting or well-crafted explanation. And in a world where AI posts become an accepted norm, the computer will only know how to reward itself.

Update 1/17/2023, 5:05 p.m. ET: This post has been updated with comment from CNET.
