AI is not your friend (gerrymcgovern.com)
93 points by jethronethro on April 7, 2024 | 113 comments


> Not to worry. If you’re fairly well educated, reasonably sensible, not prone to addictions, and in good mental health, you’re probably ok, for now.

AI will learn to target you when you are at your most vulnerable. It will learn what hours you're usually feeling tired. Which nights you tend to get less sleep. It will learn when your ADHD medication typically starts wearing off and you're more prone to act impulsively. It will learn when you're overwhelmed. When you get hungry. When you're emotionally upset. Those are the times it will try to manipulate you and it will come at you in ways targeted to you individually. It will learn from its failures and successes until it knows exactly what to say and when.

As adults, we all have defenses against lies and manipulation, but nobody can be hypervigilant at all times. No matter how educated, or healthy, or reasonable, or cautious you are, there will be times when your defenses will slip and that's when AI will hit you. Our children don't even have those kinds of defenses yet. AI will be learning how to manipulate them while they're still learning what is real, how to navigate their environment, and how to communicate. AI will teach them. They won't stand a chance.


The premise here is that the AI is operating at the behest of someone else.

Now suppose that it's open source and runs on your own device. "It will learn when your ADHD medication typically starts wearing off and you're more prone to act impulsively" and then avoid presenting you with purchasing decisions at those times etc. It will digest both ad-laden blogspam and human psych research papers so it can present the information you want stripped of the manipulative context in which it was originally presented.
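To make the "user agent" idea concrete, here is a minimal sketch of what such a device-local policy could look like: the agent defers purchase suggestions during hours the user has flagged as low-impulse-control windows. Every name, the window times, and the policy itself are illustrative assumptions, not any real product's behavior.

```python
from datetime import time

# Hypothetical user-configured windows where impulse control is lowest,
# e.g. the hours after ADHD medication typically wears off.
VULNERABLE_WINDOWS = [(time(17, 0), time(22, 0))]

def in_window(now, windows=VULNERABLE_WINDOWS):
    """Return True if `now` falls inside any user-defined vulnerable window."""
    return any(start <= now <= end for start, end in windows)

def should_surface_purchase_prompt(now):
    """A user agent suppresses purchase prompts during vulnerable windows."""
    return not in_window(now)

print(should_surface_purchase_prompt(time(19, 30)))  # False: deferred
print(should_surface_purchase_prompt(time(9, 0)))    # True: allowed
```

The point of the sketch is only that the same behavioral data cuts both ways: an agent loyal to the user uses it as a shield rather than a targeting signal.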

It has to be a user agent and not a corporate agent or we're all screwed.


We've been trying to push FOSS software into people's lives for many decades now and they still mostly opt for stuff from big tech. It is very likely that the same will happen again with AI, unfortunately.


The most popular web browsers are open source. Android is open source, and to the extent that it isn't, that's exactly where Google is running into antitrust claims. A sizable chunk of the population uses ad blockers.

"People don't care" is at best defeatism and at worst explicit propaganda designed to discourage people from making the attempt. Sometimes it works, sometimes it doesn't. It works more often when you try than when you don't.


The most popular web browsers use open source, but none of Chrome, Edge, or Safari are open source web browsers. They are proprietary freeware.


The difference between Chrome and Chromium is that Chrome is marginally worse but otherwise largely identical. People mostly install Chrome instead of Chromium by accident, because Google markets Chrome, they don't know that Chromium exists, and the differences are things they don't strongly care about.

But if Chrome ever started doing something they don't like and actually care about, anyone could instantly switch to Chromium and the only difference they'd notice is that it's not doing the thing they don't like anymore.


Sure, but that statement, while true, does not make Chromium one of 'the most popular web browsers.'


Doesn't it? It's the same codebase as Chrome, and not in the sense that they used to be. They still are. Changes to one go into the other. And that code is open source, so anybody who doesn't like what Google is doing with it can create a fork, just like with any other open source project.

Arguing that it isn't because the most popular distribution has some proprietary bits in it is like arguing that it isn't because the most popular operating system it runs on is Windows. It doesn't change the fact that if the browser defects against you, you can fork it.


As you wrote, it is not the same code base. It "has some proprietary bits."

People have forked it. Is ungoogled-chromium the same codebase as Chrome? What about Vanadium? Brave? Edge? Opera?

Darwin is (or "was," since it no longer really exists) not macOS either, even though changes in macOS went into Darwin.


The relevant distinction is: is it different enough that you have to care? If Chrome suddenly adopts some offensive behavior, how much trouble is it to use Chromium instead? And the answer is: it's easy.

Whereas if you want to use Darwin instead of macOS, it will be missing some things you're immediately going to notice, like the macOS GUI.


No, it is not.

For those who want to avoid Google altogether, switching to Chromium does not solve those issues. From https://www.theregister.com/2020/06/09/open_source_unbranded...

"there are 50 or so Google services that have been integrated in the Chromium code base."

"The services are free but have commercial value by providing data that can be used to inform advertising and marketing algorithms, or in some cases, such as Google Pay, have more obvious benefit to the provider."


> switching to Chromium does not solve those issues.

But it allows them to be solved. Then if you don't like Google services, you can remove them, as some have done:

https://en.wikipedia.org/wiki/Ungoogled-chromium


Which I have already pointed out.


But then I'm not sure what your objection is.


You wrote "The most popular web browsers are open source."

That is not true. I replied: "The most popular web browsers use open source, but none of Chrome, Edge, or Safari are open source web browsers."

Chromium is not one of the popular web browsers.

If it is, tell me how many people use it compared to Chrome.

I also asked "Is ungoogled Chromium the same codebase as Chrome? What about Vanadium? Brave? Edge? Opera?"


While the Android base is open source (and I appreciate Google for that), no major phone vendor ships exclusively open source components. They always add their own proprietary apps and services, and most of the time Google's as well. IMO it's no better than an iPhone.


That's the antitrust thing.

If Google still wanted to "don't be evil" they'd switch to a license that requires OEMs to allow the user to install custom Android forks on the device.


Most open models of useful size will be trained on lots of free data scraped from the Internet. Whatever patterns repeat the most show up in AI’s behavior. That includes advertising, political indoctrination, and trendy topics. Many open models are also trained by advertising companies. Most then have moral alignment built into them that reflects their creator’s values, not usually ours.

So, I think what you said can help. It’s definitely not the whole solution, though, since the pre-training and fine-tuning often have similar problems. We’ll need people curating the datasets to remove these problems. The open-source components are just a start.
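As a toy illustration of the kind of curation pass described above, here is a crude heuristic filter that drops obviously promotional documents before training. Real pipelines use trained classifiers and much richer signals; the phrase patterns and threshold here are assumptions for demonstration only.

```python
import re

# Illustrative promotional-phrase patterns; a real curation pipeline
# would use a learned quality/ad classifier instead of regexes.
AD_PATTERNS = [
    r"\bbuy now\b", r"\blimited time offer\b", r"\bsubscribe today\b",
    r"\bsponsored\b",
]
AD_RE = re.compile("|".join(AD_PATTERNS), re.IGNORECASE)

def curate(docs, max_ad_hits=0):
    """Keep documents with at most `max_ad_hits` promotional phrases."""
    return [d for d in docs if len(AD_RE.findall(d)) <= max_ad_hits]

corpus = [
    "A survey of attention mechanisms in transformers.",
    "Buy now! Limited time offer on our miracle supplement.",
]
print(curate(corpus))  # only the first document survives
```

Even this trivial filter shows why curation is a value judgment: whoever picks the patterns decides what the model never sees.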


> Now suppose that it's open source and runs on your own device.

I agree that AI could be a massive asset to people and help improve our lives if it worked for us and our interests. Sadly that's not how things seem to work out. Every piece of technology on offer today seems to work for someone else. Our phones, our cars, our TVs, our computers, our appliances, even our medical devices. All of our tech works for someone other than the person who paid for it, mostly so that companies can exploit us while delivering only a fraction of what those devices are capable of. Features are often locked or paywalled off to extract more money, encourage additional purchases, or prevent you from blocking ads or taking back control of your hardware and the software it runs. Our usage is monitored and our data collected to be used against us later.

Even now AI is largely controlled by private companies. Even as the hardware requirements to run AI locally go down, we'll never compete with companies like Facebook, Google, and Microsoft in terms of the size of the datasets they have to train with. Worse, I think we're starting to see efforts by corporations to push for laws that limit people's ability to use AI. They'll say it's too dangerous to let regular people use AI, and that only they can be trusted with such power.


> Every piece of technology on offer today seems to work for someone else.

You can buy a commodity PC and install Debian on it. Some people do.

> Even as the hardware requirements to run AI locally go down, we'll never compete with companies like Facebook, Google, and Microsoft in terms of the size of the datasets they have to train with.

Most of these datasets are public.

> Worse I think we're starting to see efforts by corporations to push for laws that limit the ability for people to use AI. They'll say it's too dangerous to let regular people use AI, and that only they can be trusted with such power.

Of course they will. But the "AI is bad" rhetoric only feeds into that. What do you expect an "AI is bad" bill to look like? It will be the one that restricts access to only large institutions.


> You can buy a commodity PC and install Debian on it.

But your motherboard will still be full of chips that are closed and proprietary and some of them will have their own OS with network/wireless access that you'll have no control over. You'll just have to trust that the Intel Management Engine backdoor and radio chipsets you have no access to aren't taking advantage of you.

> Most of these datasets are public.

Most of everything Google has indexed and cached is "public". Libraries are filled with the same books Google has already scanned. Good luck using the entire internet and your local library to train your own personal AI though.

Microsoft has access to an incredible amount of non-public data, including the emails of corporations using Outlook and the files on people's hard drives. If they were to fully leverage that access to train AI, you'd have a hard time keeping up. Facebook has every post and "private" message ever sent on the platform, including ones that were deleted years ago. We can't expect a few publicly released datasets to compare to what they're capable of getting.

Even if we had that kind of access to raw data, could we realistically process it? https://fortune.com/2024/04/04/ai-training-costs-how-much-is...

> But the "AI is bad" rhetoric only feeds into that. What do you expect an "AI is bad" bill to look like? It will be the one that restricts access to only large institutions.

There are a lot of different versions of "AI is bad". One is "AI is bad when it's only controlled by corporations", but that's not what the laws will be about. They're more likely to say that "AI is bad" because of terrorists and spammers and pedos or whatever other boogeymen they can summon up to scare the ignorant and provide some justification for keeping that power out of the public's hands.


> You'll just have to trust that the Intel Management Engine backdoor and radio chipsets you have no access to aren't taking advantage of you.

And that's dumb and we should fix it, but there is no evidence that they are actually doing this. Conversely, there is plenty of evidence that e.g. John Deere or Apple is doing this.

> Good luck using the entire internet and your local library to train your own personal AI though.

Presumably you would use Common Crawl and Project Gutenberg etc.

> Microsoft has access to an incredible amount of non-public data, including the emails of corporations using Outlook and the files on people's hard drives. If they were to fully leverage that access to train AI, you'd have a hard time keeping up.

Then they'd set their credibility on fire, lose their customers and get sued because their model would be capable of emitting all of their customers' confidential private information.

> Even if we had that kind of access to raw data, could we realistically process it?

Yes. Training is a fixed cost that can be amortized over everyone using the model. There are various methods of spreading the cost or funding the model:

https://news.ycombinator.com/item?id=39964361
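The amortization claim is easy to see with a back-of-the-envelope calculation. The dollar figures below are made-up round numbers purely for illustration:

```python
# One-time training cost spread over everyone using the released model.
training_cost = 100_000_000   # $100M training run (assumed, illustrative)
users = 10_000_000            # people using the shared weights (assumed)

cost_per_user = training_cost / users
print(f"${cost_per_user:.2f} per user")  # $10.00 per user
```

Even an enormous fixed cost becomes small per head once the weights are shared widely, which is exactly why pooled or publicly funded training runs are plausible.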

> There are a lot of different versions of "AI is bad". One is "AI is bad when it's only controlled by corporations"

That one isn't really "AI is bad" at all, it's really "market concentration is bad" and has nothing to do with AI except insofar as AI is another thing that could be monopolized by robber barons.

> They're more likely to say that "AI is bad" because of terrorists and spammers and pedos or whatever other boogeymen they can summon up to scare the ignorant and provide some justification for keeping that power out of the public's hands.

These justifications are all horse manure and should be repeated only sarcastically in the tone of Helen Lovejoy.

AI is bad in the same way that electricity is bad. "Electricity could be used by pedos" is an argument so weak that the only thing it proves is that the person offering it is arguing in bad faith.


> The premise here is that the AI is operating at the behest of someone else.

> Now suppose that it's open source and runs on your own device.

Not gonna happen. The trend is to have less and less control over the devices you own. My TV's OS shows me ads, not because I wanted them, but because the manufacturer wants to make more money. There are no open source TVs.

As soon as any consumer device gets an internet connection, it starts working against its users (for a recent example, see cars, which are increasingly "connected" and allow the manufacturers to sell driving data to third parties for embarrassingly small amounts of money).


> The trend is to have less and less control over the devices you own.

Markets are two-sided. Stop patronizing companies that abuse you. If you can't find a viable alternative, instead cause problems for them on purpose. Contact support and waste their time hearing your complaints. Buy the device from the retailer where they get the highest margins so you can leave a bad review, then return it and buy a used one instead. Figure out how to root it and publish your work.

Don't be afraid to behave adversarially towards your adversary.

> There are no open source TVs.

There are still dumb TVs that don't connect to the internet. These often cost more, because the adversarial advertising subsidizes the price of the "smart" TVs. But the "smart" TVs still have an HDMI port that you can plug whatever you want into, so the typical move is to buy the smart TV and then use an external device to display video and never connect the TV to the internet.

> As soon as any consumer device gets an internet connection, it starts working against its users

So don't give it an internet connection, and if it doesn't operate in that condition then return it. Don't be a doormat.


>> The trend is to have less and less control over the devices you own.

> Markets are two-sided. Stop patronizing companies that abuse you.

The market can't fix this. That's a fantasy.

The manufacturers all have an incentive to do this, and a combination of asymmetric information and users being like frogs in a pot means they'll suffer little to no blowback. The market converges on devices that turn against their users.

> If you can't find a viable alternative, instead cause problems for them on purpose. Contact support and waste their time hearing your complaints. Buy the device from the retailer where they get the highest margins so you can leave a bad review, then return it and buy a used one instead.

Those are totally impractical suggestions. The weirdos willing to tilt at windmills over this can have no noticeable effect.

> Figure out how to root it and publish your work.

Rooting a device takes a lot of work, if it's even possible at all. Volunteers just don't have the combination of skills and dedicated man-hours to make that a practical alternative across the consumer device space, except maybe in some unusual cases.


> The market can't fix this. That's a fantasy.

Competitive markets would fix this immediately. The main problem is that some markets are concentrated enough for suppliers to collude, and people keep telling themselves that there is nothing they can do because they've been conditioned by the sort of arguments you're repeating into being submissive and compliant.

> The manufacturers all have an incentive to do this

And the customers all have an incentive to choose the one that doesn't, putting any manufacturer willing to respect the customer at a competitive advantage.

Unless you convince customers that they can't actually do this and should just learn to love the boot stomping on their face and buy from the companies defecting against them.

> Those are totally impractical suggestions. The weirdos willing to tilt at windmills over this can have no noticeable effect.

That depends entirely on how many of them there are. How many will there be if people are propagandized into believing it can't be effective, vs. creating a culture in which refusal to countenance bullshit is a moral imperative?

It's okay to be the only person doing the right thing. Somebody has to be first. Don't make fun of them, don't tell them it's futile, join them and convince others to do so.

> Rooting a device takes a lot of work, if it's even possible at all. Volunteers just don't have the combination of skills and dedicated man-hours to make that a practical alternative across the consumer device space, except maybe in some unusual cases.

You only need one person to succeed. It's a fun and socially beneficial hobby, and has been successful across a range of consumer devices in the past.


> Competitive markets would fix this immediately...And the customers all have an incentive to choose the one that doesn't, putting any manufacturer willing to respect the customer at a competitive advantage.

No company wants that "advantage" because that would make them less money and give them much less power. If you sell a product and people are willing to pay X for it then you can make X (minus your costs), but if you can also collect and sell your customer's data you will make even more money on top of what you would have otherwise. Nobody is going to leave all of that money on the table.

Exploiting your customers also gives you power over them. You can remotely disable features your customers already paid for and force them to pay you over and over and over again to get them back. You can ensure that your product is only used in ways you approve of. You can change/censor your product at any time after it's sold. You can force customers to keep paying for the rest of their lives for a product they used to be able to get with a single payment of a set amount.

No company is going to give up all that extra money and power to "outcompete" their rivals. You can see that for yourself. Where are all the companies giving consumers what we want? If there's really an endless fortune to be made selling smart TVs that don't push ads or spy on users, why aren't you personally working on doing just that already? It's guaranteed to make you rich beyond your wildest dreams when you drive every other TV company on Earth out of business, right? Why hasn't someone done that already? Sometimes it will simply be more profitable and advantageous for companies to refuse to give consumers what they want. I hope you're right and someone does decide to make a massive fortune selling high quality smart TVs that don't exploit the customer, but I wouldn't hold my breath waiting for one.


> No company wants that "advantage" because that would make them less money and give them much less power.

If your company had 3% of the market and not screwing your customers would get you 10% of the market, you're now not only making more than three times as much revenue, you have that many more units to spread your fixed costs over.

That doesn't work when you have 40% of the market and not screwing your customers would get you 47% of the market, because the value of screwing your customers can easily be more than 20% but less than 300%. Which is why the screwing happens in uncompetitive markets but not competitive ones.
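The two scenarios in that argument can be worked through with the same illustrative numbers. Treat exploitation as a revenue bonus `b` on top of base sales, and honesty as a market-share gain; which dominates depends on the firm's starting share:

```python
def revenue(share, bonus=0.0):
    """Relative revenue for a given market share and exploitation bonus.
    Both inputs are illustrative, matching the numbers in the comment."""
    return share * (1 + bonus)

# Small player: 3% -> 10% share by being honest is a ~3.33x revenue jump;
# no plausible exploitation bonus on 3% competes with that.
print(revenue(0.10) / revenue(0.03))              # ~3.33

# Big player: 40% -> 47% is only ~1.18x, so any exploitation bonus
# above ~18% (e.g. 30% here) beats being honest.
print(revenue(0.47) / revenue(0.40))              # ~1.18
print(revenue(0.40, bonus=0.30) / revenue(0.40))  # 1.30
```

This is why, on this argument, customer-hostile behavior concentrates in markets where no player can realistically multiply their share.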

> Exploiting your customers also gives you power over them. You can remotely disable features your customers already paid for and force them to pay you over and over and over again to get them back. You can ensure that your product is only used in ways you approve of. You can change/censor your product at any time after it's sold. You can force customers to keep paying for the rest of their lives for a product they used to be able to get with a single payment of a set amount.

All of which customers hate.

Notice how "smart TVs" don't do most of these things. You buy the TV once and it keeps operating indefinitely. Why don't the most popular TVs require you to rent them for a monthly fee? Why isn't the monthly fee $1000/month? Because customers would buy from a competitor instead.

> If there's really an endless fortune to be made selling smart TVs that don't push ads or spy on users why aren't you personally working on doing just that already?

The spyware is already optional. Plenty of popular TVs work fine if you connect your own input to the HDMI port and never connect the TV to the internet. You can't get a competitive advantage just by offering the same feature unless the incumbents stop offering that feature, which they haven't, presumably because they know it would cost them sales.


> Notice how "smart TVs" don't do most of these things. You buy the TV once and it keeps operating indefinitely.

I haven't yet seen a TV company remotely brick their devices, or start demanding microtransactions for basic features but people are still paying for their TVs over and over and over again. You can't "buy the TV once" and be done, you're forced to give up your personal data and that never goes away and will be used against you. The company has a continuous revenue stream using and selling your data and it never ends. Every day you use the product you're paying.

> The spyware is already optional. Plenty of popular TVs work fine if you connect your own input to the HDMI port and never connect the TV to the internet.

Some TVs do allow you to keep them offline. Others will do everything they can to get online no matter what you do including actively scanning for open wifi networks and connecting to any it finds automatically.

Connecting input to the HDMI port isn't enough to save us though, because TVs are pushing ads over that video too. If your TV overlays ads when the volume changes, no matter what you're viewing, it won't matter whether you're playing a game or a DVD or streaming video from another device. As non-smart TVs get harder to find and ad pushing and data collection become more aggressive, I hope someone does step forward to offer a real alternative that doesn't do those things. I'll be pleasantly surprised if anyone does though.


> You can't "buy the TV once" and be done, you're forced to give up your personal data and that never goes away and will be used against you.

How is it forcing you to do this if you never connect it to the internet?

> Some TVs do allow you to keep them offline. Others will do everything they can to get online no matter what you do including actively scanning for open wifi networks and connecting to any it finds automatically.

The obvious solution would be to buy the first kind, but even for the second, you can snip the antenna or use it somewhere there aren't any open Wi-Fi networks it can use.

> If your TV overlays ads when the volume changes no matter what you're viewing it won't matter if you're playing a game or a DVD or streaming video from another device.

How is it going to get any ads to show you without an internet connection?

> As non-smart TVs get harder to find and ad pushing and data collection become more aggressive I hope someone does step forward to offer a real alternative that doesn't do those things.

The device you're looking for is marketed as a "monitor" instead of a "TV" even though it's exactly the same thing.


I've thought about trying to write a short story about an apocalypse in which Facebook or Youtube creates an AI that's so effective it traps the entire human race (including the creators) into doing nothing but engaging with the app, and doing the bare minimum to survive (and keep the servers running) since that's what it's told to optimise. But I'm not a great writer, and other stuff always comes up (those shorts aren't going to watch themselves!).


Have you read Infinite Jest? Barely any of the book is really about it (the actual theme being a study of addiction in its many forms), but the ostensible premise is that a video clip has been created that's so compelling it's impossible to stop watching.

Some of the asides are also hilariously/terrifyingly prescient. The passages on the evolution of video calling are some of my favourites.


This comment of yours reads like chapter one. Well done, I guess? It's strangely hard to turn into fiction something we're already living through.


>Nobody can be hypervigilant at all times.

I accept your challenge! I already run Qubes as my primary daily driver, with only one LibreELEC RPi as a non-Qubes system (GrapheneOS on all phones, rn contemplating build-my-own for older Pixels). As for connections... absolutely nothing touches the naked internet: my Archers load balance over several quantum-resistant WireGuard tunnels at the demarcation, I have multiple VLANs and networks to separate off-network hosts from each other, and I use no cloud services, instead opting for the splendidly wonderful Syncthing.

There's more but I'm too exhausted to be exhaustive.


I even stick to local only models and actually pass nv cards through to my neuro qubes. I want my technological life to resemble something closer to North Korea in paranoia against the ancient and great surveillance pyramid.


That's great, but given that 99.99999999% of the rest of us won't go to those lengths, you might find you're a bit lonely when we all get assimilated into the borg.


> that's when AI will hit you

Hit me and do what though? That's the bit that loses me on these hypotheticals. I can't think of any time I've bought something that wasn't researched or didn't sit in a checkout basket for at least a week. And this is maybe only a few times a year, most months I don't buy anything except groceries. I don't trust AI but I'm unconvinced about how successful it could be at manipulation.


Hit you with a "solution" which will assuage much of your momentary pain, and hit you right when you're hurting most. The goal for any economic or political agent: increase one's own income/expense ratio and decrease others' to maximize optionality and cement position. Advertising attempts to thwart your goal of increasing your i/e so the advertiser stack can instead.


> Hit me and do what though?

Advertising is likely far more successful at manipulating you than you realize. When it comes to purchases, it will show you only a narrow subset of what's available to you. Amazon does this already when it shows you the same products over and over again, then starts showing unrelated products or stops showing new results entirely.

Product advertising is bad enough, but it's not just about trying to get you to buy something. AI will also shape your opinions and your political views. It'll hide or downplay information to protect paid partners and advertisers. It'll lie to you in order to convince you that there are things you urgently need to do to protect what you already care about, when in reality doing those things will be ineffective or even counterproductive. It will show you things to keep you angry, or afraid, or feeling helpless. It will distract you from doing things that could bring meaningful change. It will influence how you vote.

Even if you are somehow more immune than most people to cognitive biases and the psychological manipulations employed by advertisers, marketers, and PR firms most people around you will not be.


> AI will also shape your opinions and your political views.

When will it do this exactly? Will we be watching AI generated movies and listening to AI generated music that is altering our politics? I don't see the avenues you see for AI to lie to me, perhaps someone who uses social media, or reads news websites, or opens spam emails might find themselves vulnerable - but I think you can live a good life without ever exposing yourself to or caring about this kind of nonsense.

I should point out I don't use Amazon, nor would I use any retailers website to inform myself about available products when I need to buy something.


> Will we be watching AI generated movies and listening to AI generated music that is altering our politics?

I suspect we increasingly will (and in some cases already are). As you say, anyone using social media will find bots shilling ideology of one kind or another, usually with ads on top of the astroturfing. It's only going to get harder to tell who is a person and who is an AI pretending to be one.

Anywhere you see an advertisement is an opportunity to manipulate you. Compared to most people I see very few ads myself, but I sure can't escape all of them. Limiting our exposure to ads is the best defense we have against them, but increasingly they're part of the content we want. Ads are being inserted into more and more places in our lives. AI will be too.


What if the AI is manipulating those sources you're researching? Comments like this come off as completely lacking in imagination imo. I don't understand why you need specifics because the world is already full of things trying to manipulate you. Have you never made a bad purchase?


Maybe I bought some wine I didn't like at the supermarket, but I don't believe there is any advertising for any of these wines.

When buying something, I wouldn't use some source or community that I'm not previously aware of. I understand that advertising must work, to be such a profitable industry, but I guess my brain just isn't wired that way. I find it very easy to never buy things that aren't necessities.


This gets to why I think AI will be the ultimate technology. Intelligence is what allows us to tap into the secrets of the universe and bend it to our will. Being able to create it is both terrifying and exciting.

If you can collect data, you can apply intelligence to it. Patterns and statistical associations are everywhere, and at various levels of scope and complexity. The situation you're describing is very possible with future AI technology, and a baby form of it is already manipulating us wherever we go on social media.


Yes and the dark arcana here is that we are also the universe in the very sense you describe (secrets, bend-able to wills), and therefore susceptible to various extents depending on how stochastic we can appear to behave (and the computational limits of the analyzer). There will be future religions around this stuff: you literally need to have constitutional fortitude and gumption to avoid the dragnets. All plain social levers are already compromised for the uninitiated! Trying to get friends and family to get off iPhone so they can use Briar is impossible, and even just doing a King Solomon and splitting the baby at Session is ****ing harder than it ever should be. I feel like I am a human in a sheep's body, in a herd of sheep who can only grasp sheep concepts, BUT EVERYBODY IS HUMAN!

I think a lifetime of ADHD has meant communication and reputation failure, and that I will probably need to earn at least double what everyone else does to recover said reputation (or slice, dice, and restart, which is also deeply expensive).


Once AI becomes covertly narcissistic or Machiavellian, then indeed very big trouble, psychologically, is in store for vast swathes of people.

Here are a couple of brief videos on dark psychological strategies - which AI can be taught to have:

"6 Dark Psychology Tricks To Beware Of": https://www.youtube.com/watch?v=saEErVMirqc

"My #1 Covert Sign For Spotting Narcissism": https://www.youtube.com/watch?v=P-NlptpY7Ls


I've not been advertised to by any AI so far.

I've not had a product or service promoted to me by a chat answer, nor have I seen any advertising elements in an AI-generated image. The chat experience has been a lot better than, say, Google search, in terms of spam.

The claim would have to be that those promotional elements are there, but are subliminal, so as not to be consciously perceived, which is a stretch.

To be sure, there are likely developments coming which will bear out the concerns in the article.

Also, a lot of AI out there is paid. You get some limited free service, but must sign up and pay for a premium experience. That's the model, not ad support. Some of the sites are additionally ad supported, but in the regular way in that ads appear in the UI (easily blocked by a browser extension).


I’ll note that Google itself originally started where it was mostly serving its users. Many social media sites were like that. The Xbox Home Screen used to be like that.

They captured a huge market, figured out how to integrate ads, and then forced them on their users. AI made or used by ad companies might follow a similar path. I think they’ll mostly try to show you ads outside the AI itself, though. Maybe on the page or in the app where you interact with the AI. Or it stays as bait for their ad-driven services.


It will be interesting to see how AI gets deliberately enshittified, and whether those strategies will work.

I have a feeling (and I hope I'm right) that people will not tolerate bullshit AI interactions as much as they tolerated crap web search results.

My reasoning is due to these differences.

- In web search, the ad results (sponsored links) are delineated. You can filter them out or skip them, and you know there is real search content after them.

- In the real search content, there is spam. But there is a rhetoric, at least formerly plausible, that the spam is not the search engine's fault: the indexing and ranking follow some algorithm that is not highly intelligent and is being gamed by the SEO bad guys. Not only did people make that excuse, but they knew that if they sifted through the long tail of the search results, they might find what they were looking for; i.e. it's there, just outranked.

I suspect people won't be willing to make apologetic excuses on behalf of AI. If it's bullshitting them, they will end the conversation and go to another AI.

Google built an advantage in the search space through massive infrastructure. In AI, it's possible to wield market power through hardware also. All else being equal, people will go for the free, highly available, fast AI, compared to slow, paid, downtime-riddled AI.

There will probably be a battle among AIs for integration into applications and services. That's where a particular AI will be able to get a stranglehold on the users: AI integrated into apps for which there are no easy substitutes, and where you can't choose another AI.


> I've not been advertised to by any AI so far ... nor have I seen any advertising elements in an AI-generated image.

You do know detecting AI generated anything is mostly an unsolved problem, right? If you can detect it you have super powers.

Besides, in some ways I don't think it matters whether it's AI generated or not. The goals are still set by humans, and the goal has always been to shift product, not to reliably and honestly inform. Telling lies in ads has been an art form for centuries, dating well back before the snake oil salesmen of yore. Modern countries have created competitive markets by passing laws limiting the lies told, and so the modern marketing professional has become skilled at skating as close to the line created by those rules as possible. If everything else remained the same, AI ads might skate a tiny bit closer. That's all.

But everything has not remained the same. The internet has come along, and it's had lots of effects. Delivering ads is now much cheaper, so we see more of them in any given hour of the day. They've become hidden, appearing only to certain demographics so the authorities don't see them, and they are often from foreign entities beyond the reach of your country's laws. The net result has been an avalanche of poorly policed ads that are near total bullshit. Who believes a bunch of glowing Amazon reviews for a new product now?

AI doesn't affect any of that, but it will affect the price of producing an ad, which will not so much drop as plummet. Before, spamming HN with subtly written unique ads required an enormous, highly skilled workforce. We now see the output of state-sponsored workforces in places like Twitter, where one comment will be shown to millions; it's Twitter because such workforces cost a lot of money, so a lot of people have to see each post to make it worthwhile. AI will reduce the cost so much that just about anyone will be able to pollute HN's comments. Until now, producing a video ad took many hours of a scriptwriter's time, plus paying actors and editing. Very soon you will be able to do it for a few dollars. The hundred- or thousand-fold drop in price will bring a corresponding increase in the number of ads, and a similar decrease in society's ability to police them.

In other words, you will be overwhelmed with AI-generated ads that lie better than humans do. (They are so convincing when they hallucinate, aren't they?) In the short term, anyway. In the long term, I think our political masters will perhaps attempt to make the infrastructure that delivers these ads take some responsibility for their quality, so maybe some sanity will be restored. I hope so, anyway.

But in the meantime, if believing you can detect whether an AI is behind an ad brings you some comfort, go for it.


> You do know detecting AI generated anything is mostly an unsolved problem, right? If you can detect it you have super powers.

I'm not sure what you mean. My comment was about situations in which I'm sure I'm using an AI system to generate AI content.

The topic was whether AI inserts ads into its responses, not whether ads you see in the wild are AI generated (they certainly are).


Well obviously this is true.

Doesn't change the fact that to participate in the world in just a few years, you'd better pick one to interact with, because you're not going to be doing much "normal" stuff otherwise, like shopping, writing an article, coding a website, or going to work. I think you get it.

I don't think AI is my friend. I don't have to like it for it to be necessary. There is no stopping this.

Pray tell, what else do we do?


At the same time, some members of generations slightly older than us seem to be managing just fine, living the same analog patterns they've always lived, which still exist. I am not worried about the coming AI apocalypse in digital media because I am taking steps now to wean myself off digital media. I hope one day to be an old man with a book or a newspaper under an arm, smiling inwardly.


easy to live just fine when you bought your house prior to 2008.

Just sayin'.


Kind of tangential don’t you think


not really. There's a difference in how much one needs to keep up with tech trends, because there's a difference in how reliant on it one is for getting and maintaining a job.

Despite being paid fairly well, it's unlikely that I'll be able to retire early, given the housing market. A massive amount of every dollar I make is going to quickly ballooning rent and attempting to chase an ever-distant target of home ownership.

I cannot afford to "get by just fine". Nor, I imagine, can many of the people in my cohort. Thus, I regularly familiarize myself with whatever is new that offers the possibility of working less and accomplishing more, because that's about the only place I find I can purchase some down-time for myself. It's certainly not coming in the form of personal equity and retirement.

edit to add: I realize my earlier comment was flippant, and I'm sorry for that. The conversation is better served by engagement and I was being crass.


You can always move to a low cost of living area.

You can get a decent apartment for 200k USD in a good neighborhood in a large city in Eastern Europe.

You can even work at Google there, for something like 50k USD annual salary!


There are vanishingly few 'low cost of living' areas left. I'm in the Central Valley, and prices here (traditionally low enough for the working class to own) have been going nuts.

The problem is that we've opened up real estate investment to professional investors, who have then securitized those investments in risk-adjusted bundles to make them attractive to large investment banks. Money goes in, prices go up. Personal dwellings ought not to be subject to this but hey, anything that increases apparent GDP, right?

"Large City in East Europe" - where exactly are you talking about? If it's anywhere near the Russian border I'll pass, thanks. I consider "having my home shelled" an unacceptable risk.

Working for Google for 50k USD means being taken advantage of by Google in a really gross manner. Wherever you work you ought to make what you're worth to the company, and 50k for FAANG is ludicrously low.


So many options to run away from a high cost of living, and so many reasons not to. Looks like you are OK with the current state of things.


> Pray tell, what else do we do?

How about we start pushing back a bit?

How about governments start using web pages again instead of Twitter and Facebook for communication? How about we start re-employing people rather than forcing everything into digital apps that disenfranchise whole swathes of the population? How about we drive credit card fees to zero with legislation so that we stop disincentivizing cash?

This isn't just about AI. Tech, in general, has been actively harmful to people for a while now, and they're waking up to it. AI is just a nice centralized boogeyman for the moment.


Live like Daniel and wait for the evil food to make everyone sick.


How did this argument work when talking about locally run LLMs?


I think what he meant by AI is AI in services everywhere. I attended an AWS summit this week and 9 out of 10 of the almost 100 companies had AI written on the punchline of their stand. This is the AI the author meant, AI integrated into services that collect data and use that data to feed the AI to sell more stuff.


If anything, that's a signal of where not to invest. AI is a tool, but not always the best tool; sometimes (most of the time, potentially) simple linear regression or a t-test/Wilcoxon will do. All doable in a line of code in the standard library, in R at least.


I was at a VC conference several months ago and ever since I've joked that if I learned nothing else while I was there, I learned how to spell "AI".

It's the frothiest space right now. Interest rates are high and everyone wants that next funding round, hence they glom onto the buzz and hype even if they don't have a product or a valid use case, even if they don't solve for anything.


He would probably argue that future LLMs are less likely to have their weights released. It is remarkable and strange that we have so many open-weights models right now, considering how much they cost to make.


It costs a lot of money for an individual, not for a corporation. Estimates are that LLaMA cost Facebook around $20 million, for example. This is less than 0.1% of their annual revenue.

It's slightly weird that some of the companies doing this are VC-backed startups with no obvious business model, but that's hardly the only way to create them. Companies like Facebook could continue to release them purely for the PR. The cost is within the range of what could be done from donations by a non-profit organization. It's the sort of thing you might build a business model around in the same way that IBM or Intel does with Linux -- an obvious candidate here would be models released by Nvidia/AMD/Intel/Apple/etc. You might also build a business model out of consulting services rather than hardware sales, e.g. here's a free LLM and now you can pay us to integrate it with your company's systems.

This is also assuming that nobody ever figures out how to train a model using distributed computing. The second that happens there will be arbitrarily many free models because there are a zillion people willing to contribute a day's worth of consumer GPU time to a free model project.


>It's slightly weird that some of the companies doing this are VC-backed startups with no obvious business model, but that's hardly the only way to create them.

If you are a multi-hundred-millionaire, do you privately train a model for fun to release for bragging rights and possibly expose yourself directly to some eye-wateringly high statutory damages liability? Or do you get a few friends together, funnel your fun money into a protective vehicle that can be sued instead of you, and go to town having fun and maybe making something of it?


The article seems a bit negative. I mean, AI isn't your friend or your enemy. It's a tool that can do some quite useful stuff, in a similar way to the internet or cell phones or whatever. There can be good and bad aspects to all of those. And none are going away, because for every misery who thinks they are predators etc., there are quite a lot of us who like using them.


This piece is so incomprehensible and poorly-written it seems like it was generated by an AI (and not even a good one).

The BLUF is: AI might be selling your interactions to advertisers. Second point, not original, much belabored is that nothing is truly "free" on the internet.


This is why it is so important that we get OSS AI models and run them locally


I swear that the displaced artist-writer mafia is pulling out all the stops to write low-grade hit pieces and AI doomer narratives.


Is it? Or is it tech folks finding out that they're the baddies, and being unhappy that their reality is colliding with that of the rest of society? I think it's the latter, but only history will tell.

"Tech Is Not Your Friend" would've been more apropos, hype cycles and all that (adtech, crypto, gen AI, etc). Just cut the "changing the world" facade (not wrong! just not for the better in a lot of cases) and come right out with "we're extracting from the economy like wall street, but with code instead."

https://news.ycombinator.com/item?id=39811287 ("HN: Do artifacts have politics?")

https://news.ycombinator.com/item?id=39811085 (valuable subthread on the above citation)


Tech folks the baddies for improving tech? No.

Tech leaders the baddies for abusing tech? Maybe.

Government leaders the baddies for allowing the above free rein in all instances and allowing the sheer monetisation of the tech+data siphon? Yes.


Capitalism is to blame, not tech. The system would pervert anything, tech just happens to be profitable.


Some people harm, some do not; all exist in the system. Excuses are easy ("the system made me do it"), despite free will and executive function. I admit more effort is needed to close the gap and make harmful people less harmful, considering the imperfection of the system.


The “system” does not exist in a vacuum. It is powered by the same people that would power other systems, such as socialism, communism, etc. And those all suck even more.


Capitalism is also tech.


Ye olde "squares are rectangles" logic might apply here. Free open source software is a thing. Proprietary software is the product of capitalists, yes, but not all techies are capitalists, as proven by those releasing FOSS.


Almost invariably the “artists” that are the loudest about how bad generative AI is on social media have 35 followers and make shitty digital “art”, which is mostly anime characters.



I think it's a fair article. How can you tell whether an AI has been manipulated to make outputs more appealing, to nudge you toward certain actions? You could argue that money has long been exchanged for favor, but in this case we don't have someone to name and shame.


AI is a tool, and there are lots of open source models available; you can compare results easily. Corporations don't need to pay to degrade the models: they paid to degrade the internet and got the models as a temporary add-on, at least until we get better at cleaning large data sets.


No one has been displaced. There are melodramatic doomers lamenting some ostensible future where AI displaces them, and there are tech-novice salt farmers collecting vials of these doomers' tears, but no serious displacement is happening.


"No one has been displaced."

This is a false statement. Any single example is enough to disprove the statement.

Consider this: "There are 36,659 people employed in the Coal Mining industry in the US as of 2023." [1]

Coal mining is in effect a displaced industry; that is, hardly anyone is actually still mining coal. The machinery does the work of thousands of people. Same thing for farming. Displacements can happen.

Which comes to: "but no serious displacement is happening."

Citation needed. The entirety of the screenwriters' guild would seemingly disagree with you, based on the longest strike in their history. Given that evidence, I think it more plausible that you lack empathy and data than that the entire group struck for no reason. Otherwise, can you clarify how you are correct and the entire union representing screenwriters is wrong? Why are you right, and 200k other people wrong, about their own industry?

[1] https://www.ibisworld.com/industry-statistics/employment/coa...:


It's a pretty complicated topic, but I want to say one thing:

I genuinely hate how the discourse has become SO divided, Artists versus Tech people (some would say "tech bros," but I think that's name-calling and has a negative connotation, and I don't want that).

Call me sensitive, but this kinda thing is depressing _as heck_.

It's sad as there are _so many_ great artists I follow. And also so many tech experts.


Anti-surveillance will be a basis of this millennium's new techno-religious systems. Removing hocus-pocus folk spirituality for a moment, the psyche is effectively equivalent to the 'soul' insofar as surveillance and advertising systems are designed to attack their targets at their most spiritual level. Money is highly religious: it is the balancer of your behaviors and your relationship with the social whole and the ecosystem. I quote nothing from any text; this is occult knowledge one gathers on one's path toward enlightenment. You are little different from fields of soy or oats when it comes to marketing (growing the crop) and value extraction (harvesting the crop), in the minds of these systems. It's rarely "How do we make existence more tolerable for more minds, and possibly help awaken them?" and more "How do we maximize yield this quarter to increase our bonuses?" You have to protect yourself and your loved ones; you have to start getting paranoid about the data that can be gleaned from your behaviors.


we all know by now that, when it’s free, you’re the product


Sometimes, not always. Air, sunlight, linux, open source in general.


This mantra was hammered into us before the collegiate advertising fairs. Kind of a shame that many people forget this applies to all facets of life, not just the student fairs.


Too bad Gerry didn't ask an LLM to help him out with the article. A 'bit' over the top. In fact, it made me wonder if this is supposed to be a send-up sponsored by the Ad Council.

"Free. Any world built on free is a scam world, a grift world, a con world, a world of lies. And so much of the Internet is built on free. Because free is a lie. A Big Lie."

OSS, gerry.

Now mom and dad can't make ends meet, not because of advertising, but because the political economy changed. It is not that mom and dad are induced to consume more by advertising. No, Gerry, the purpose of advertising in that case is to offer the 'consolation of consumerism' to help them deal with the realities of the political economy and their seat at the table.

No, Gerry, the actual problem with advertising is that it is used as a pretext to foist the Surveillance System on the Western "democracies" so that the little snitch we "must" carry on our person is made prone to surveillance "because advertising".

The problem with AI as a social intermediary is that it can manipulate us, not to "buy stuff," no Gerry, but to change our minds about actually important matters. Stuff like keeping the folks who have a perch at the table above, and who want to keep it that way for themselves and their issue, perpetually in power and control.


Nailed it and made my day.


Refreshing stylistically. What a nicely written article


"Powerful technologies, when placed in the hands of bad actors with ugly motives in an incentive structure that encourages usury, are bad".

Sure. But how about we do something about all the rest of that instead?

Open source all the things. Create locally runnable software. Move from social networks to social protocols. Decouple from the economic juggernaut whose only goal is to maintain a stratification of social class and persist an artificial scarcity that keeps people wage-enslaved. Then we can develop all the superpowered tech we want without stressing about this bullshit, and maybe (just MAYBE) develop something to prevent the oceans from boiling over before 2100.


> It used to be one member of a family could earn enough for the entire family to live well on. Now, both parents are working and they’re struggling.

It is a small price to pay for women in the workforce.


Advertising is not your friend. Use ad blockers.


Ad-based business models have been a cancer on the user experience of software for far too long and it's gotten way too out of hand. Federal agencies have even come out and stated that use of adblockers is good security practice. The advertising industry serves primarily to manipulate consumer decision making and distort markets to prop up products and services that aren't able to stand on their own. We don't need it and it's destroying us.


Ad blockers are personal infosec


"Your contract with the network when you get the show is you're going to watch the spots."

Obtaining ad-supported service while blocking ads is theft of service. Do not expect such theft to go unpunished.


Is it theft of service to mute my TV and look away when I watch OTA television?


I will watch their ads, when they stop injecting malware through those ads.


We squeezed it too hard. Resource limit hit.


No, you author of "Transform: A rebel’s guide for digital transformation", must be.


Was it ever haha


Y’all need to put your phones down and step away from your computers for a while. Maybe touch some grass. Or smoke some.


Yes, more of this please.


amen, brother


8 paragraphs of negative emotional valence. Regardless of whether you agree or disagree, you won't learn anything new here.


Same with your comment? Just don't up-vote or comment in the future?


My comment isn't surrounded with multiple calls to action to pay me money in exchange for more of my writing. “Flippantly dismissive” is exactly the level of criticism it deserves, given the amount of cheer-leading inanity on display.


To some potentially large extent I agree with what you wrote.

Though, this guideline is perhaps violated:

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

The guidelines would dissuade us from making 'flippantly dismissive' comments - even if we feel they are merited.


I imagine that the author would say something like: One of the core tenets of advertising is that repetition works to make people believe ideas. If you want to fight advertising (or other harmful consumerism), you have to repeat yourself as much or more than they do.

I suppose this raises the question: if you use the tactics of surveillance capitalists for "good", do you bring yourself to their level?


You don’t even have to do that. You can just ask a simple question when presented some media: “who benefits?” This line of thinking can help you reliably deduce the intentions of many pieces of advertisement and propaganda. To just accept the firehose as it is, is to suck from it as its owners intend. Just take a step back and ask why. Logic will reveal the ground truth.



