"Believe none of what you hear and half of what you see" may have been overly generous.

Turtles should be the only certifiable emotional support animal because of their extended lifespan.
If your emotional support animal dies, what do you do? Seek out the mythical 10x pet?

Today's lesson:
It's not cargo-cult programming -- It's just accounting for a bunch of weird shit.

This infographic the Wall Street Journal published in 2013 to illustrate the crushing burden of taxes on ordinary people tells you everything you need to know about who the Wall Street Journal thinks qualifies as "ordinary people".

Low power mode: disengage!
Low bandwidth mode: engage!

Dispatcher on scanner: "We need you to load your <unintelligible> into Lotus Notes"
Cop: "fuck that, man"

There is no Доверяй ("trust"), there is only проверяй ("verify").

When the content is put together by an ESL person (or someone uneducated, or with a poor grasp of language, or with some kind of disability), there are still subtextual elements that clue you in that there is a human mind on the other side. Cohesive reference points, subtexts, and contextually relevant indicators of emotion or opinion.
But these things aren't easily codified.

I'd like to see a neural network trained to identify AI-generated content versus human-created content. That would be optimal.

It seems like the biggest giveaway is 'context switching' in the middle of a paragraph (or sentence, or even phrase!).

Also, nonsensical grammar and incongruent participles (or other structural elements, for lack of a better description) provide artifacts that a human generally wouldn't think up.
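That 'context switching' signal can be sketched crudely in code. A toy, purely illustrative example (pure stdlib; the function names, the Jaccard-overlap metric, and the threshold are all my own invention, not any real detector): score adjacent sentences by vocabulary overlap and flag a paragraph when two neighboring sentences share almost no words.

```python
import re

def sentence_overlap_scores(paragraph: str) -> list[float]:
    """Jaccard similarity of word sets for each adjacent sentence pair."""
    sentences = [s for s in re.split(r'[.!?]+\s*', paragraph) if s]
    word_sets = [set(re.findall(r'[a-z]+', s.lower())) for s in sentences]
    scores = []
    for a, b in zip(word_sets, word_sets[1:]):
        union = a | b
        # 0.0 means the two sentences share no vocabulary at all
        scores.append(len(a & b) / len(union) if union else 0.0)
    return scores

def looks_incoherent(paragraph: str, threshold: float = 0.05) -> bool:
    """Flag a paragraph if any adjacent sentence pair shares
    (almost) no vocabulary -- one crude 'context switch' signal."""
    scores = sentence_overlap_scores(paragraph)
    return bool(scores) and min(scores) < threshold
```

A real detector would need far more than lexical overlap (this flags legitimate topic changes too), but it shows the kind of structural artifact a program could look for.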

Nonsensical grammar and incongruent participles (or other structural elements, for lack of a better description) provide artifacts that help identify fraudulent machine-generated content.
This seems like it would become difficult, though, as there are varying levels of... linguistic aptitude that still have a real mind behind them.

Fake content is a threat to cognition, another 'reality influencer': it imposes a processing load on the reader, who has to gauge whether this piece of content is _actually worth a damn_ before deciding whether to move on.
A lot of times folks don't notice these kinds of time wasters until they aggregate into enough lost time that someone finally says "wtf!"

Except machine-generated content isn't just a technical problem to solve. Prior 'aggressive' web technologies were just that -- specific technical techniques or components that could be easily identified by (another) computer program.

I fear that this is going to keep picking up momentum until it becomes a substantial threat to information-seekers, drastically reducing the signal-to-noise ratio. This happened with pop-up ads, click tracking, real-time bidding, and all of the other underhanded web mechanisms that require the end user to take a defensive approach to browsing.

I am increasingly seeing what I suspect to be ML-generated bullshit content pop up in search results: not just tarpits / honeypot advertising pages, but whole sites (or even networks of sites).

In another twenty years, we will probably come to have 'always known' something similarly 'shocking' about spacetime that some of us have suspected all along.

Twenty-plus years ago, the mainstream looked at talk about things like Echelon and MKULTRA and quickly dismissed it as conspiracy theory. Until it became accepted as public knowledge.

Open instance in the spirit of netizenship. Cyberpunk leaning, tech-forward, available to the public; I provide a lot of services that no one but me really uses, just 'cause.