Every PC gamer / hardware tech review forum is full of anti-AI hatred.
People want to buy a new GPU, add RAM, or pick up a new SSD or hard drive. All of these have doubled or quadrupled in price in just a few months.
Then there are reddit threads every day where I think 30% of the original posts and comments are AI-generated spam. If I see a post with em dashes or anything that ends by asking for "thoughts?" I just downvote it and report it as spam. I want to interact with actual humans, not AI bots.
Then we see posts about AI data centers and electricity use which will lead to higher electric bills for ordinary people if demand is higher than supply.
This is ignoring all the stuff about people losing jobs.
So why should the video-game-playing population, or even the general population, be in support of AI? Of course it has uses, but there are so many negatives right now that it is easy for me to understand why people are already sick of it.
>Then we see posts about AI data centers and electricity use which will lead to higher electric bills for ordinary people if demand is higher than supply.
That hasn't really played out in reality. The correlation between datacenter capacity growth and electricity price growth is poor.
https://www.economist.com/content-assets/images/20251101_USC...
Yeah, this is how almost everyone feels about AI, I think. Talk to graphic designers: they say AI cannot do design, but they will happily use it for programming work. Talk to game programmers: they say AI can't program or make games, yet they use it to help make art assets. This hypocrisy is why it is always difficult to take AI criticism online seriously, especially when AI companies seem to have far more subscribers than everyone's purported view of AI would suggest.
The usual left/right camps haven't managed to pick sides and absorb this yet. Maybe they still will. I don't know who would get which side, though.
A cynical part of me says it's something everybody can hate. I can see both sides taking that. I can't see either side embracing it as part of the left or right identity.
Maybe it's more a conflict between those with power and those without. Like return to office, or open offices, or cubicles before that, and probably many other things going back to the Luddites and earlier.
The right is desperately trying to figure out how to get their commoners on board, but the only narrative they've got so far is: this will help us kill "terrorists." That story rings pretty hollow when the orange one campaigned on no new wars and they're trying to blame AI for their decision to bomb an Iranian school.
People are increasingly hostile towards AI because they’re realizing there’s a good chance it turns out to be the most toxic and destructive thing that humanity has ever invented. The creation of modern AI may be looked back on as the worst thing that humanity has ever done — if there’s even enough humanity and enough truth left to reflect.
The public was much more likely to say AI would harm them than benefit them.
There are so many things called "AI" these days that studies like this are basically meaningless. I think (hope) most people's views can't be reduced to a single binary question.
I think these studies aren't meaningless at all, but the fact that "AI" is a loosely used term means that many people might view even simpler ML methods with skepticism, as opposed to just, say, chat-like LLM tools.
You can't regress to the mean and call it creation. LLMs don't make novel content. This is why all the people using AI-summarizers to understand their boss's AI-expanded micromanaging emails aren't getting anything new done. Anti-compression is going to accelerate climate change.
I'm still waiting for the last ML movement to revolutionize business intelligence. Back when regression models were going to give us all forecasting. Turns out garbage in still equals garbage out and there still aren't any silver bullets. The organizations that couldn't get their act together to collect good data about their businesses for traditional analysis methods to work are shock-faced that model-overfitting writ large isn't saving them from their doofus C-suites.
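To make the overfitting point concrete, here's a minimal sketch with made-up numbers (plain numpy, nothing from any real BI stack): a "forecasting model" fit to history that is really just noise looks brilliant in-sample and falls apart the moment you ask it to forecast.

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)

    # "Garbage in": 24 months of history that is just noise around a flat baseline.
    months = np.arange(24.0)
    sales = 100 + rng.normal(0, 10, size=24)

    train_x, train_y = months[:18], sales[:18]
    test_x, test_y = months[18:], sales[18:]

    def fit_and_score(degree):
        # Fit a polynomial "forecasting model" to the first 18 months, then
        # compare in-sample error against error on the 6 held-out months.
        model = Polynomial.fit(train_x, train_y, degree)
        in_sample_mse = np.mean((model(train_x) - train_y) ** 2)
        forecast_mse = np.mean((model(test_x) - test_y) ** 2)
        return in_sample_mse, forecast_mse

    for degree in (1, 12):
        fit_err, forecast_err = fit_and_score(degree)
        print(f"degree {degree:2d}: in-sample MSE {fit_err:9.1f}, forecast MSE {forecast_err:12.1f}")

The degree-12 model chases the noise: tiny error on the months it saw, enormous error on the months it was supposed to forecast. No amount of model sophistication rescues a series that never contained signal in the first place.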
I think we have lots of evidence that the single binary question "is this something people like 'us' support or not" is the only deciding factor in a lot of political decisions people make. They don't consider the facts of the particular issue and how it might impact them. They abdicate that role to whoever they believe defines what 'people like us' believe.
There is almost no messaging about how AI will benefit non-business owners/managers. The message isn't "it will make your job or life easier"; it's "you can be more productive, so we'll ask you to do more and hire less." When computers were becoming common, the messaging was more positive and hopeful.
Big companies are diving straight into the mustache-twirling benefits "for the business" and of course people will push back.
>> Public hostility toward AI now looks stronger than ordinary skepticism toward a new technology. People have reasons for that response, including fraud, misinformation, privacy invasion, concentration of power, and job displacement. Job displacement carries its own emotional weight because it threatens status, livelihood, and social usefulness, which gives the fear an existential edge.
>> This essay explores why anti-AI sentiment may be gaining force.
The article lists off all the obvious and credible reasons why people are opposed to AI in the intro paragraph. It then spends the next 25 paragraphs advancing a very clever pet theory, derived from psychology, about what might be going on here. While interesting in its own right, the article misses the obvious concerns that it raised in the intro paragraph.
The company whose blog this is sells "AI-assisted clinical documentation" - I feel this is an attempt to explain anti-AI sentiment as an unreasonable aversion to AI rather than a response to the real reasons for anti-AI sentiment. There's a weird trend in the AI industry of pathologizing people who don't like AI.
It's not "weird", it's hostile marketing. "How do we overcome the negative sentiment we see as an obstacle in order to sell to people who don't want it, or people who will be around people who don't want it?" It's an entirely natural, commonplace, awful thing. See also "how do we market cigarettes" and "how do we maximize social media engagement" (the latter being one reason outrage gets amplified).
I find it weird because I've seen traces of it before in people who believed in the singularity 20 years ago, people who really believed that anti-AI was pathological. Back then the stakes didn't seem as real and immediate as now, and now you can see it on pro-AI reddit subs. But I agree that language and attitude is co-opted for marketing purposes, for example last year when there was a lot of talk about doomerism.
Yeah. There are many critical safety concerns, and somehow people with vested interests in AI have tried to spin that as "oh, it's astroturf marketing by the AI companies to make it seem like their products are dangerous and therefore powerful, just ignore it". Which is simultaneously trying to promote the products and dismiss the opposition. It's infuriating, and blatantly wrong, but it's also a natural consequence of "it is difficult to get a man to understand something when his salary depends upon his not understanding it"[1].
I've been thinking the opposite. It sucks to be in the generation of workers that are displaced by AI. It's going to be great to be in the generation where work just isn't something that humans are expected to do.
That's what the whole UBI thing was about though. People did see this coming and wanted to preempt it. I'm not sure whether it would've worked, but people did try to come up with solutions for this transition period.
We are never going to live in a society that doesn't expect people to work. There may not be enough work for half the population, but people will still be expected to work to live. We already live in a society that could feed every last poor person and we still choose not to, cuz "but muh tax dollars!"
I mean, assuming we don't hit some limit with AI, we're going to get to the point where the best way humans can affect productivity is to just get out of the way.
I don't think this link supports your claim. All English-speaking countries in the "Opinions about AI by country" chart have 60%+ of people who are nervous; in every country but Japan, at least 40% of people are nervous; and there's no obvious correlation with the "trust in government regulation" data further down.
AI is a technology with the explicit end-goal of substituting energy for people. It's not intended to benefit the common man, it's intended to benefit the capital owning classes.
That has been the case since industrialization. It's not exactly new that capitalists want to replace workers with machines. The question is whether there will be new and different jobs for people, or if AI and robots will take over everything this time. If that's the case, capitalism will break (nobody can afford to buy the products anymore) and a new economic system will emerge. This could be a great system for everybody or a dystopian one. My bet is on dystopian, until there is possibly a violent revolution by the peasants.
Replacing workers in specific industries is one thing. AI is trying to replace people in general. Expecting new jobs to pop up is misunderstanding the goal of this technology which is to eliminate jobs.
I find it offensive that comments that appear to be legitimate additions to the conversation are downvoted into oblivion and then flagged without even a single response to suggest where the author of the comment in question was in error. This is definitely not what I would expect to see on an ostensibly neutral platform that claims to be dedicated to technical discussion of issues on their merits.
If you're looking at the same comment I am, I suspect it just tripped AI-generated-comment heuristics, perhaps people's personal ones and perhaps the site's. It's an unfortunate world for people who like em dashes and have some genuine reason to be creating a new account.
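For what it's worth, those heuristics are usually about as crude as people describe upthread. A toy illustration (entirely my own sketch, not anything this or any other site actually runs): flag anything with an em dash or that ends with "thoughts?", and accept that it will misfire on humans who simply like em dashes.

    import re

    def looks_ai_generated(comment: str) -> bool:
        # Crude heuristic of the kind described upthread -- not a real detector.
        text = comment.strip()
        has_em_dash = "\u2014" in text  # the em dash people keep flagging
        ends_with_thoughts = bool(re.search(r"thoughts\?\s*$", text, re.IGNORECASE))
        return has_em_dash or ends_with_thoughts

    examples = [
        "Great write-up \u2014 the benchmarks were especially interesting. Thoughts?",
        "I just think the pricing is too high for what you get.",
    ]
    for comment in examples:
        print(looks_ai_generated(comment), repr(comment))

The false-positive rate on real humans is exactly the unfortunate part.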
Any discussion about AI, LLMs, etc. is incredibly complicated. I could go on and on elaborating on this, but I'm just going to leave my preface at that.
There is one thing I found to be true over and over again no matter what the anchor point is for the conversation, no matter the context, no matter someone’s sentiment, etc: nobody likes to have their time wasted.
LLMs are incredibly useful for cutting corners. They make it very easy to waste people's time. No matter how useful they are, no matter the use case you have found, no matter the integration, people keep encountering bad search results and receiving clearly LLM-generated work that wastes their time.
Unless somebody comes up with a cure for that, there will always be a significant portion of the population that is hostile to LLMs - and rightfully so! No promise of productivity will overcome that.
TL;DR: the biggest problem with LLMs is that they enable people to waste other people's time.
There has always been content that wastes people's time, because people have always confused length and complexity with comprehensiveness and depth.
These were always poor proxy metrics for "good content," but in a lot of environments, especially professional ones, they were how work was evaluated. Naturally, others used LLMs to generate content that satisfies these metrics.
The slop epidemic is a consequence of what people erroneously valued for so long. Now they have it, and it's meaningless, and even if most of it was always meaningless, they can't easily tell the difference between "fluff with something meaningful" and "fluff with only fluff" anymore.
But now we're "democratizing" wasting people's time. If the AI-boosters have their way, we won't even be able to have good conversations about something as simple as the movies we saw over the weekend. It will all be "bespoke, AI-generated content." The conversations will be the equivalent of telling a story about a weird dream you had last night.
I want to use AI to do your job.
I don't want someone else to use AI to do my job.
I don't want to spend my attention on AI content that takes more time to consume than create.
> I don't want someone else to use AI to do my job.
This is just hypocrisy quite honestly.
In 20 years the thanksgiving dinner fights over AI equality are going to be wild.
>I'm not a bigot I support trans rights. But clankers aren't welcome in our share house.
>> OK Millennial. I'm a cyborg with 95% of my brain running in a private server.
A small group of people are going to acquire immense wealth and power from this new technology.
80% of everyone else will be facing the possibility of losing their jobs or seeing reduced income, if they still have a job.
This will be another Rust Belt stretching over decades, but for white-collar jobs and coastal states.
I think the sentiment is still a valid one, and I think it's an accurate assessment.
[1] https://quoteinvestigator.com/2017/11/30/salary/
Disagreed. It is an attempt to paint the real reasons for anti-"AI" sentiment as unreasonable, period.
AI can help you in the near term and harm you in the long term.
I think the more people use AI the more their view shifts from the former to the latter.
Sure, but that has nothing to do with long term vs. short term.
It has everything to do with have vs. have-not.
Let's read again.
> 76% of AI experts said AI would benefit them personally, while only 24% of the U.S. public said the same.
Imagine if 76% of financial experts said a higher tax on low earners would benefit them, whilst only 24% of the public said the same.
https://hai.stanford.edu/ai-index/2026-ai-index-report/publi...
#1? They run on an unlimited power source: human gullibility.
An impersonal corporation that has spent decades improving its capability to make you give up in disgust, now jumping on the AI bandwagon? Check.
A voice recognition system that doesn't actually recognize your voice? Check.
Dunning-Kruger-level responses once you finally do get your voice recognized? Check.