If you’ve ever tried to read a science journal article and suspected that it was 100% computer-generated, unintelligible horse crap, there’s a good chance that it is. It’s not that you’re just dumb and don’t understand the subject matter. That could never be the case. The paper must just be nonsense.
Earlier this week, Nature revealed that scientific journal publishers Springer and IEEE are both removing over 120 published papers after discovering that every single one is nothing more than fancy-sounding gibberish. The fairly egregious oversight was discovered by French computer scientist Cyril Labbé, who’s spent the past two years cataloguing the collection of computer-generated bullshit.
I knew it! And you know who’s trying to use these articles? Bad teachers. I know how they operate. They see the title of an article and add it to the assigned reading list without reading it themselves first. And when they do actually read it, they don’t understand it either, because there is no sense to be made of it. So they figure it must be genius and their students must read and report on it. Then, when the reports make no sense either, everyone fails. Except me: I just made stuff up and wrote it down. I didn’t read the article. Sorry for throwing off the curve.
Hit the jump for an example.
In recent years, much research has been devoted to the construction of public-private key pairs; on the other hand, few have synthesized the visualization of the producer-consumer problem. Given the current status of efficient archetypes, leading analysts famously desires the emulation of congestion control, which embodies the key principles of hardware and architecture. In our research, we concentrate our efforts on disproving that spreadsheets can be made knowledge-based, empathic, and compact.
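For the curious: papers like this were reportedly churned out by SCIgen, an MIT prank program that works by recursively expanding a context-free grammar, i.e., filling sentence templates with randomly chosen jargon until something paper-shaped falls out. Here’s a toy sketch of the idea in Python. The grammar rules below are made up for illustration; they are not SCIgen’s actual grammar.

```python
import random

# Toy context-free grammar (hypothetical rules, in the spirit of SCIgen).
# Nonterminals are wrapped in angle brackets; everything else is literal text.
GRAMMAR = {
    "<sentence>": [
        "In recent years, much research has been devoted to <goal>; "
        "on the other hand, few have synthesized <goal>.",
        "We concentrate our efforts on disproving that <thing> "
        "can be made <adj> and <adj>.",
    ],
    "<goal>": [
        "the construction of <thing>",
        "the visualization of <thing>",
        "the emulation of <thing>",
    ],
    "<thing>": [
        "public-private key pairs",
        "congestion control",
        "spreadsheets",
        "the producer-consumer problem",
    ],
    "<adj>": ["knowledge-based", "empathic", "compact", "scalable"],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into plausible-sounding gibberish."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: just literal text
    expansion = rng.choice(GRAMMAR[symbol])
    # Replace each nonterminal occurrence with its own fresh expansion,
    # so repeated <goal> or <adj> slots get independently random fillers.
    for nonterminal in GRAMMAR:
        while nonterminal in expansion:
            expansion = expansion.replace(
                nonterminal, expand(nonterminal, rng), 1
            )
    return expansion

if __name__ == "__main__":
    rng = random.Random()
    print(expand("<sentence>", rng))
```

Run it a few times and you get endless variations on "much research has been devoted to the emulation of congestion control" — grammatically tidy, semantically empty, and apparently good enough to slip past reviewers at least 120 times.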