AI can write a passing college paper in 20 minutes - ZDNet
(February 24, 2021; ZDNet)
The writing prompts covered a variety of subjects, including U.S. History, Research Methods (Covid-19 Vaccine Efficacy), Creative Writing, and Law. GPT-3 earned a "C" average from the professors across the four subjects, failing only one assignment. The AI scored highest on the U.S. History and Law prompts, earning a B- on both. GPT-3 scored a "C" on the Covid-19 Vaccine Efficacy research paper, outscoring one of the human writers.
Overall, the instructors' evaluations suggested that GPT-3's writing mimicked human writing in grammar, syntax, and word frequency, although the papers felt somewhat technical. As you might expect, the AI completed the assignments dramatically faster than the human participants: the average time between assignment and completion was 3 days for humans, versus 3 to 20 minutes for GPT-3.
What if the Steemit web site had some sort of simple captcha requirement when posting, and appended a verification code to the article - either as a comment or inside the post - to prove that at least a human hit the "Post" button? (Definitely not Google's horrendous reCAPTCHA... more like the ones used by Brave or Maxthon/LivesToken.) That still wouldn't deter copy/paste, though. Other thoughts?
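One way the verification code described above could work is a server-side signature issued after a captcha pass, binding the code to the post body so it can be checked later. This is only a minimal sketch under assumptions: `SECRET_KEY`, `issue_code`, and `verify_code` are hypothetical names, not anything Steemit actually provides.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would never leave the server.
SECRET_KEY = b"server-side-secret"

def issue_code(post_body: str) -> str:
    """After a captcha pass, sign the post body and return a short code
    that can be appended to the article (as a comment or inside the post)."""
    digest = hmac.new(SECRET_KEY, post_body.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def verify_code(post_body: str, code: str) -> bool:
    """Check that the appended code matches the post body it was issued for."""
    return hmac.compare_digest(issue_code(post_body), code)
```

As the comment itself concedes, this only proves a human clicked "Post" and solved a captcha; copy/pasted AI-generated text would still pass.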
Read the rest from ZDNet: AI can write a passing college paper in 20 minutes
Does this equally tell us something about the status quo in college level grading methodology?
Good point! Is the AI that good, or are the graders that bad (or uninterested)?
Also, I was thinking after I posted that it might be easier to differentiate between a human and an AI if you had multiple works to examine. Maybe the AI can get past a single paper, but can it do it consistently if the grader sees ten papers? Or a hundred? Also, maybe it's possible to put AI detection tools into the hands of graders.
On a platform like Steem, I'd expect to see particular themes and unique writing styles emerge repeatedly over time from a human author. I'm not sure if we'd see that from an AI, so that might be something that an effective curator could look for.
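The idea above - that a human author's style recurs across posts while an AI's may not - can be made concrete with simple stylometry: compare word-frequency profiles across an author's posts and score their average pairwise similarity. A minimal sketch, assuming whitespace tokenization and cosine similarity as the metric (real detection tools are far more sophisticated):

```python
from collections import Counter
import math

def word_freqs(text: str) -> Counter:
    """Crude stylistic profile: lowercase word-frequency counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def style_consistency(posts: list[str]) -> float:
    """Average pairwise similarity across an author's posts; a curator
    might expect this to stay relatively high for a single human author."""
    sims = [cosine_similarity(word_freqs(p), word_freqs(q))
            for i, p in enumerate(posts) for q in posts[i + 1:]]
    return sum(sims) / len(sims) if sims else 0.0
```

This echoes the point about examining ten or a hundred papers rather than one: a single post gives a curator almost nothing to compare, but a body of work gives a consistency signal.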
On another note, I was wondering whether curators should even care if an article is written by a human. If the article's job is to draw readers, then maybe that's all the voters should care about: whether or not it attracts an audience. I'm not sure what I think about that argument.
Tonight's link is related: The value of your humanity in an automated future | Kevin Roose - TED
"I'd expect to see particular themes and unique writing styles emerge repeatedly over time from a human author. I'm not sure if we'd see that from an AI"
On the other hand, bulk submissions from a natural-language AI might have an all-too-obvious tell - trending artifacts, including recurring themes that act as a giveaway. I wonder: 1) whether a sort of anti-aliasing would be a useful addition to the algorithms producing the papers (if it isn't already a feature), and 2) whether that would successfully mitigate the risk of recognition, if, as your example suggests, a professor would be likely to grade recognized AI papers less on par with human submissions.