How well did the algorithms detect insightful analysis, deep understanding beyond the immediate subject matter, factual correctness, salience and an ability to write to a specific audience?
I'm far from well informed, but my understanding of standardised tests is that the standard specifies the algorithm, which already ignores your good points above in order to achieve standardised grading. All that really changes is whether a human or a robot executes the algorithm; the human insight has already been squeezed out of the system.
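A minimal sketch of that claim, with rubric items and weights invented purely for illustration: once the standard is written down as a checklist, a human and a program apply it identically, and nothing outside the checklist can affect the grade.

    # Toy rubric in the spirit of a standardised marking scheme. The items
    # and weights are my own invention; the point is only that the insight
    # lives in the checklist, not in whoever executes it.
    RUBRIC = [
        (lambda e: len(e.split()) >= 250, 2),      # meets minimum length
        (lambda e: e.count("\n\n") >= 2, 1),       # at least three paragraphs
        (lambda e: "therefore" in e.lower(), 1),   # crude proxy for a conclusion
    ]

    def grade(essay: str) -> int:
        """Apply each rubric item mechanically and sum the weights."""
        return sum(weight for check, weight in RUBRIC if check(essay))

Insightful analysis, factual correctness and audience awareness simply never enter the computation.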
When reading articles on the Internet I look out for superficial factors including the following (the first two are sketched in code after the list):
‣ Misuse of U+0022 typewriter double quotation marks in place of U+201C and U+201D double quotation marks
‣ Misuse of U+002D hyphens in place of U+2013 en dashes and U+2014 em dashes
‣ Poor understanding of comma and semicolon usage
‣ Choice of typeface, use of ligatures and micro-typographical features, amount of leading used…
‣ Misuse of their, there, they’re, where, were, we’re, affect, effect…
‣ RAS syndrome (sīc): redundant phrasings such as “ATM machine” and “PIN number”
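The first two checks lend themselves to automation. Below is a minimal sketch; the thresholds and patterns are assumptions of mine rather than any published heuristic, merely counting U+0022 characters and treating a spaced hyphen as a probable stand-in for an en dash or em dash.

    import re

    TYPEWRITER_QUOTE = "\u0022"            # straight double quote
    CURLY_QUOTES = "\u201c\u201d"          # U+201C and U+201D
    SPACED_HYPHEN = re.compile(r"\s-\s")   # " - " where an en or em dash likely belongs

    def superficial_report(text: str) -> dict:
        """Count typographic shortcuts in a piece of text."""
        return {
            "typewriter_quotes": text.count(TYPEWRITER_QUOTE),
            "curly_quotes": sum(text.count(c) for c in CURLY_QUOTES),
            "spaced_hyphens": len(SPACED_HYPHEN.findall(text)),
        }

    print(superficial_report('He said "fine" - then left.'))
    # {'typewriter_quotes': 2, 'curly_quotes': 0, 'spaced_hyphens': 1}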
These factors can assist with the analysis of a piece of writing. In certain contexts an author can manufacture credibility simply by using triangular Unicode bullets instead of ASCII asterisks, and lengthy, complex words in place of simple English can likewise be mistaken for credibility. Closer analysis of these techniques often shows a writer attempting to belittle readers. Documentaries use similar tactics that the untrained eye will likely mistake for credibility: soft filter effects, bookcase backdrops for expert interviews and strategic silence all have profound manipulative effects on viewers.
Grading algorithms will likely reward manipulative and belittling writing techniques and penalise honest superficial mistakes. I would rather read the honest opinion of an author who misuses “their” and “there” than have to pick apart the carefully crafted writings of a dishonest linguist.
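To make that concrete, here is a naive grader sketch; the features and weights are guesses of mine at how such a system might behave, not a description of any real product. It scores pompous vocabulary above plainly written honesty and docks points for surface slips.

    import re

    def naive_grade(text: str) -> float:
        """Reward long words, penalise a few surface errors; nothing else."""
        words = re.findall(r"[A-Za-z']+", text)
        long_words = sum(1 for w in words if len(w) >= 9)   # "complexity" bonus
        slips = len(re.findall(r"\btheir is\b|\bthere own\b", text, re.I))
        return long_words - 2.0 * slips

    print(naive_grade("Their is a problem."))                          # -2.0
    print(naive_grade("Multifaceted considerations notwithstanding.")) # 3.0

By construction, the dishonest linguist outscores the honest author.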
As an aside, I recommend bookmarking The Browser[1] (Writing Worth Reading). It would be interesting to see the results of algorithmic essay grading against this curated collection of articles from across the Internet.
[1] http://thebrowser.com/