![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
The year 2000 was peak human technology. It’s been downhill in every way since, until generative AI - which is f’in amazing. But let’s be real, the future belongs to the bots.
Ideogram, make my post an image.
<Ideogram> Sure thing boss!
Claude, re-write my post.
<Claude> Sure thing boss!
Generative AI fascinates me, and I use it frequently for both work and leisure. This photo, however, is the first to make me feel a twinge of sadness, as it reminds me that I now have to question the authenticity of everything I see.
Then I recall that even traditional photographs often undergo extensive post-processing, which somewhat alleviates my concern. To distract myself, I create an AI-generated video of the river flowing, momentarily forgetting my worries.
However, I can’t help but notice that many of these AI generations are of poor quality. Generative AI is simultaneously wonderful and troubling - a double-edged sword of technology.
Despite these mixed feelings, I must admit this is a beautiful photo.
I love generative A.I. I use it all the time, professionally and recreationally. This photo is the first to make me sad that I have to question the veracity of everything now. Then I remember the amount of post-processing a photo like this usually goes through and feel less sad. Then I make an AI video of the river flowing and forget what I was worried about. But all these generations are kinda trash. Such wonderful, terrible things. Beautiful photo, though.
/stares in smart glasses
WebP is a raster graphics file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format in September 2010, and released the first stable version of its supporting library in April 2018.
The format has spotty support across applications, and some vulnerabilities were discovered last year that required patching efforts. It’s not clear that you need to do anything about it, though.
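Since the comment above describes WebP’s headline features (lossy and lossless modes, alpha transparency), here’s a minimal sketch of exercising them — assuming Pillow with WebP support is installed; the filenames are arbitrary:

```python
# Minimal WebP round-trip sketch using Pillow (assumes Pillow was built with WebP support).
from PIL import Image

# A small RGBA image, to exercise WebP's alpha-transparency support.
img = Image.new("RGBA", (64, 64), (30, 144, 255, 128))

# WebP supports both lossy (quality 0-100) and lossless compression.
img.save("demo_lossy.webp", "WEBP", quality=80)
img.save("demo_lossless.webp", "WEBP", lossless=True)

# Reopen to confirm the format round-trips with its alpha channel intact.
reopened = Image.open("demo_lossless.webp")
print(reopened.format, reopened.mode, reopened.size)
```

Animated WebP works through the same `save` call with `save_all=True` and a list of frames via `append_images`.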
deleted by creator
Humans are really bad at determining whether a chat is with a human or a bot
ELIZA is not indistinguishable from a human at 22%.
Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.
This is a monumental achievement.
As long as no one messes with their open source contributions… (ditto for MS)
To the one person who upvoted this: We should be friends.
Aye, I’d wager Claude would be closer to 58-60. And with the model probing Anthropic’s publishing, we could get to like ~63% on average in the next couple years? Those last few % will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown by a ton of “limitations” that I thought I might not live long enough to see.
Participants only said other humans were human 67% of the time.
On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.
54%–67% is the current gap, not 54% to 100%.
Thank you, I seldom see my own thoughts laid out so clearly. As a practitioner of the Dark Arts (marketing), this union of commerce and art is a foul bargain. I think it’s time the two had some time apart to work on themselves.
Did you watch Breaking Bad?
Rabbit season
Our Fall is quite autumnal.
Honestly, I’d get on board with just about any time from 2000 to 2010. The enshittification of the internet and social-media-driven comment culture didn’t start in earnest until smartphones took off.