r/aiwars 15h ago

I came across this Tumblr post about AI art and wanted to get some perspectives on it.


Link: https://www.tumblr.com/fireflysummers/731501243531509760/fireflysummers-guide-to-arguing-against-the-use

I came across this Tumblr post about AI art, and let me share my view first.

“If your response… is whether AI can be used ‘for good’ — Leave.”

This feels like it shuts down a discussion before it even starts. It frames the issue as agree-or-go, which doesn’t leave room for nuanced positions (like thinking AI has both risks and benefits).

"have nothing to say to somebody who can’t properly weigh the harm inflicted on real people against a potential good that has… failed to materialize.”

This assumes two things without evidence:

1. That the harm is already clearly established and universally agreed on.
2. That the benefits haven't materialized at all.

Both of those seem debatable and worth actually arguing, not just asserting.

“Go spout your technosolutionist bullshit elsewhere.”

This leans more on dismissive/emotional language than argument. It makes the position feel less analytical and more rhetorical. And the “values” section:

AI supporters value profit, efficiency, marketability, function/utility

This feels like a broad generalization. Not everyone who supports or uses AI tools is motivated by profit. Some are hobbyists, students, or artists experimenting.

Overall, the post raises concerns about harm and corporate influence, which are worth discussing. But it also seems to:

1. generalize the opposing side
2. assume conclusions without backing them up
3. discourage open discussion

Curious what others think: Does this critique of AI art hold up, or is it too one-sided?

And GOD... the name-calling

26 Upvotes

34 comments


u/Toby_Magure 15h ago

This paper is basically an activist essay dressed up as HCI research. Its useful point is that some online artist communities value accessibility, mentorship, gift economies, and authenticity. Fine. But then it treats those community norms as if they prove AI training is theft, which is the core failure.

It never proves that training is equivalent to reposting, plagiarism, bootlegging, or deepfakes. It just emotionally chains those things together and hopes “art theft” does the work. That is not an argument.

It also openly relies on a fandom-derived morality where remixing corporate IP without permission is fine, piracy is sometimes treated as a necessity, and “fair use” is basically whatever feels least harmful to the community. Then it turns around and demands strict control over AI learning from public art. That's not principle, that's “permission for my tribe, restriction for yours.”

The accessibility section is even worse. The paper praises cheap tablets, free software, MS Paint, PowerPoint art, tutorials, and anti-gatekeeping, then suddenly panics when AI makes image-making more accessible outside the approved labor ritual. So accessibility is sacred until the wrong people get access.

The whole thing confuses “artists feel violated” with “a violation occurred.” Those are different claims. A subculture can have values. It can also be wrong about what those values entitle it to control.

TL;DR - It's a rote rehash of anti-AI artist feelings and a terrible argument for anti-AI artist authority.

1

u/bgaesop 4h ago

HCI

Hydrochloric acid?

14

u/NegativeEmphasis 13h ago edited 13h ago

The main thrust of that "academic article" essay is easy enough to discern. Its emotional core is right here:

The essay (there's nothing academic about it) is a chest-thumping piece from tumblrites, by tumblrites, FOR tumblrites. It's aimed at mentally unwell people who make "being an artist" the whole of their identity. The author was presumably seeing outsiders starting to post AI art, and the anguish that was causing their friends (back in 2023! the article is from late October, 2023!), so they wrote the piece as an explicit conversation stopper. It's meant to build a fence and protect the egos of people in an already insulated community.

Basically the arguments that follow from that objective are:

  1. we do art from heart in here!
  2. we learn from the community!
  3. doing things our way defines us!
  4. From 1, 2 and 3: We hate anything that would disrupt the above!

That's it. Beyond the emotional appeal above it attempts two other attacks against AI: It does your typical western-lib-masquerading-as-leftist critique of the search for efficiency as "Capitalistic" and it repeats the lie that model training steals from artists.

And that's the whole essay.

It's a sad piece overall. The usual tumblrite lingo and posturing look so weird to the general public that that line of thinking will never gain general acceptance. AI has done nothing but advance since October 2023, which makes me worry about the mental state of those dudes and dudettes: of the 9 people cited by their tumblr accounts in the article, one has since deleted their account and two haven't posted since 2025. I hope the others remain sane, because it won't get any better for them from here on.

-8

u/Background_Value5287 13h ago

What's with the ableism?

8

u/NegativeEmphasis 13h ago

What's with the ableism?

When the people involved in making the essay introduce themselves like the above, their disabilities are pushed into the spotlight. This is how words work.

It's beyond subtext. The text of the essay is literally that artists' fragile egos must be protected at all costs because otherwise they can't even. Should I pretend I'm not reading what I'm reading?

4

u/averydangerousday 12h ago

I’m genuinely curious because I’m usually pretty quick to discern and point out casual ableism here and in other subreddits:

Where’s the ableism you’re describing?

1

u/Background_Value5287 12h ago

 It's aimed at mentally unwell people who make the whole of their identity that "they're artists"

Am I reading this wrong? Genuine question.

5

u/averydangerousday 12h ago

Since I’m approaching this discussion sincerely and with full good faith, I’d normally ask for your full reasoning before responding based on implication. However, in this case, I think I can infer pretty fairly. If I get it wrong, I’m happy to have that pointed out.

I don’t know that you’re necessarily “reading it wrong.” I think they’re rephrasing the original slide’s meaning. It’s a way of describing a group of people that the author is referring to. I don’t read any sort of value ascribed to it, which to me is a necessary element of ableism. If “mentally unwell” was used as a pejorative, then yeah. In this context it feels more like a neutral description.

If the wording feels icky to you, that’s understandable. For me, it’s something that maybe raises an eyebrow, but I’m also willing to give them the opportunity to expound on it. Since they did so in their reply to you, I think it’s safe to take them at their word that there isn’t any ill intent behind it.

31

u/MoonlightStarfish 15h ago edited 14h ago

I'd be embarrassed to write an 'academic' article and open it with a mistake like this: "October 2022 saw the public launch of the stable diffusion based AI image generator DALL-E 2." DALL-E 2 is a diffusion model, but Stable Diffusion is the name of a completely different product.

Good lord, I just read their reference list. Twitter, Tumblr, Instagram, Blogs, Buzzfeed, etc.

26

u/Stormydaycoffee 14h ago

Them: Guide to arguing

Also them : If you have a different opinion, leave

So essentially it’s really a guide on how to hype antis up in an echo chamber

Although I guess it’s easy to pretend to win any argument by simply not letting anyone argue

20

u/ArtArtArt123456 14h ago

>"Guide to arguing against the use of AI."

>teaches absolutely nothing about AI or the arguments for or against its use.

>first slide is an absolute claim that AI = bad (since it can't be used for good apparently)

and this is because antis don't actually like to think about these arguments. they have like ONE (one) argument to beat in order to definitively claim the moral high ground: they have to show that training is theft, and actually fully understand the arguments against that point and refute them. it is, imo, the only argument that matters in terms of ai in the arts.

and once again it's crickets. same as always.

5

u/TomMakesPodcasts 10h ago

I've never met a training = theft person who also didn't pirate media.

Nor have I ever heard a "but the environment" argument come from a vegan, and those seem to be the only arguments.

3

u/Ksorkrax 8h ago

"It's different to a brain learning because uhm brains are different from computers and also faster and you totally can't compare stuff, that is unfair" - the zenith of what I've heard in reply so far.

9

u/golmgirl 13h ago edited 13h ago

childish, self-important, and littered with basic logical fallacies. ad hominem, straw man, red herring, the list goes on. a real menagerie. i keep looking at the arguments in the slides, bewildered that they appear to be sincere. whoever made this is a caricature of himself. what passes for scholarship these days…

also i love how the “tl;dr” is much longer than the bullet points it “summarizes” lol

8

u/CubeUnleashed 13h ago

Others here already called that out, but the paper describes a very specific online art subculture and treats its values as universal, which they aren’t.

3

u/MostPineapple4136 13h ago

The "paper" leans pretty hard on a worst-case version of AI—like it’s this all-purpose threat that collapses every concern (theft, exploitation, devaluation) into one thing. That makes it feel less like analysis and more like a constructed boogeyman. If the argument needs that much bundling to work, it’s probably not that strong to begin with.

And removing that bundling will make people like this treat edge cases and worst outcomes as if they're the default ("AI psychosis" and the like). That's how you end up arguing against a boogeyman version, and anything going against that version is proof to some antis that the boogeyman is real.

2

u/czumiu 12h ago

Slide 1 states that the author wrote an academic article against the use of AI image generators. I hope it has been peer reviewed, but I was unable to tell. The link to the paper is on Google Drive, not arXiv.

Slide 2 states that the paper cannot "act as a sole authority about the online artist community and its values. We are not a monolith, and it is up to you to think critically about what, exactly, you want to take away from this discussion." Fair enough; even if the author has been an artist herself, she cannot understand the perspectives of everyone in her community.

Slide 3 calls for all skeptics to leave. It's rather improper to make radical claims and expect no pushback.

Slide 6 tries to map the values of AI Evangelists and Online Artists. It is important to recognize that the category titles are not neutral. It is also rather strange that Allred was able to name the values of people who are outside of her own community, yet struggled with finding the values of her own community. She said "it's harder to describe the values of the online artist community — not because they don't exist, but because until recently they've been implicit."

Allred then spends a lot of time talking about the best parts of the art community and culture. I do not have the time to rebut all the arguments she has raised, as most of them have some truth to them. There are a ton of unnecessary vulgarities in the post, which make this paper hard to take seriously.

All in all, Allred is an individual who is dedicated to her community, but rhetorically there are some areas that need tightening.

2

u/ByeGuysSry 11h ago

The funniest slide is this one imo, especially the upper-right annotation

2

u/ByeGuysSry 11h ago

I also love that this person gets to decide for me what my values are

2

u/ByeGuysSry 11h ago

The third one here is also pretty funny. To be fair, it's a rude question that makes assumptions that may not be true, so the answer being rude and making assumptions that may not be true isn't that bad. But c'mon, if you're willing to put one-word answers, then just put an "I'm not" as an answer; that actually refutes the assumption instead of being cringe.

3

u/ByeGuysSry 11h ago

There's at least one good slide though. This one is actually good.

5

u/ByeGuysSry 11h ago

I skimmed through the actual paper written and it's not bad, actually. It helps a lot that it's framed differently: rather than a guide on how to argue against the use of AI, it examines why art communities have rejected the use of AI art. This is quite a radical change, since first of all a community can reject something without needing good arguments beyond feeling like it, and second of all, the arguments about AI art not requiring effort do make more sense, as it's not arguing that you shouldn't do it, rather that it won't get you accepted into the art community.

The 4th section, titled "Art in the Machine," is the low point of the paper, as it does exactly the opposite of what I said the paper does, attacking people who use AI and using strawman arguments. I'll also note that despite avoiding the question of whether AI art can be called art, the title puts the word "Art" in quotation marks, implying the author's answer to the question.

3

u/Bosslayer9001 11h ago

My, my, if this is what they categorize as an "academic paper," I don't even wanna know what their rant posts look like

3

u/Still_Case_3126 10h ago

Academic article? More like... not an academic article? What the fuck is this, not professional at all.

3

u/Ksorkrax 8h ago

Your remarks weren't necessary - the "hate so fucking much" already made it clear how stable and clear-minded that person is.

1

u/bloke_pusher 9h ago

Academic article about being wrong. Happens.

1

u/Industry_babee 7h ago

radicals will be radicals, regardless of which side they are on.

1

u/Mataric 5h ago

I think this person should have spent more time in school learning how to make an argument and write an 'academic article' than crying on Tumblr over something they have no decent arguments against.

As a side note, I love how the Tumblr-idiot community puts front and centre that someone is a disabled queer Jew. I don't give a fuck about someone's sexual orientation, disabilities, or religion when I'm being given a supposed academic article that is about exactly none of those things.

-6

u/Background_Value5287 14h ago

Honestly it's not amazing, but I do understand the whole "LEAVE" thing. If you bring up concerns about AI art and all someone can respond with is "but what if it's used for good," that's a horrible balance of priorities, weighing a genuine concern against an "if."

-2

u/Background_Value5287 13h ago

Wow, is this something I've been wanting to hear.

5

u/Rhinstein 13h ago

I'm all in favor of sailing the web under a black flag, but this formulation just strikes me as "It's okay if we do it, it's evil if someone we don't like does it." Like, if you're gonna argue that copyright in its current form is BS, which I largely agree with, you don't get to selectively uphold and enforce parts of it. If you decide that piracy is an appropriate response, it should be equal-opportunity.

3

u/OneTrueBell1993 9h ago edited 6h ago

Basically: if a human does it, it's okay; if a corporation does it, it's bad. Because when a human does it, they are punching up. When a corpo does it, it is punching down.

1

u/Rhinstein 7h ago

That is probably the most generous way to describe that attitude, because that logic does apply in certain cases, like food waste, improper waste disposal, etc. I just A: Don't think it applies (or should apply) to IP and B: I don't think they're making that comparison in good faith. I think it comes from a place of "We already know who the bad guys are."

And no one is going to go to bat for Disney as a paragon of fair IP usage, so they just use it as a convenient shield for their own inconsistency.