To AI or not to AI: Where do we draw the line?

Levina de Ruijter

I just came across a few images that were "interpreted by AI" and it made me stop in my tracks. My immediate response was that I would much rather not have any "AI interpreted" or "AI enhanced" or "AI anything" images on the board. A dedicated AI thread would be different. Actually, I think we already have one somewhere, and if not we can create one.

Or is AI the new normal and it can't be stopped, especially on phones, with their filters and whatnot that nearly all use AI now? And what about Photoshop with its AI tools? If we use those tools, then where do we draw the line? Where does AI stop being a handy tool for simple tasks like subject selection, removing distracting bits from our images, or correcting distortions? We used to do all that ourselves, manually, and it was a PITA and time-consuming, so AI is most helpful here. But where does AI start to basically replace what we shoot in camera? When does it turn an image into a lie?

What use of AI is legitimate? What use is not?

Where do you draw the line?
 
OK, you have pushed my button. Boy do I have a lot to say about AI.

First, "AI" refers to a ton of stuff: everything from ChatGPT checking your spelling and syntax before you send off an email, to a program writing a paper for you for a HS or college course. AI can mean giving you advice about a recipe, or doing a quick review of people who've posted on an issue to offer a summary. Or it can mean providing "facts" (from the current weather to who won the 2020 US Presidential election). It can mean sharpening a photo or removing some noise. Or it can "age" a portrait (so you can take that photo of you from 10 years ago and age it for your Tinder or LinkedIn account). Or it can mean taking a clothed woman you shot and making her nude (i.e. her clothes disappear), or putting someone else's head onto the body of someone engaged in an explicit sex act with a horse. Or it can mean creating a completely different photo that has never existed (like putting Donald Trump on the slopes of Mt. Vesuvius as it erupts). This is not a criticism of you, Levina--I praise you for raising this subject. But as a society/world we are guilty of just talking about "AI" in this broad sense when we need to be more specific. It's like saying "let's talk about 'technology'" or some other overly broad category. If you use Photoshop or Lightroom or even modest editing programs, you probably use AI. But that's not the same level of AI, or the same purpose, as creating a completely new photo that you didn't shoot and that didn't exist before you started typing in prompts.

Second, I hate AI that uses prompts to create new pictures (I won't call them photographs, though they may end up as JPEGs)--creating something that never existed, as opposed to using an editing program to refine a photo (i.e. sharpening the details versus putting in a completely different head or background). I hate it because (a) those backgrounds and faces and details were acquired by the AI program by "scraping" the work of others, which then goes uncredited. Imagine if someone used your photos to generate something that made money--with no credit to you and no compensation--and just changed the hair color and face of the model so it wasn't "stealing." They still used your work to create that AI-generated visual.

Additionally, (b) it's not a photo (i.e. a reality captured by a photographer and then processed or edited to delete or emphasize particular parts of the composition). It's like adding a completely new background, or surrounding someone with a crowd when they were by themselves. I get this is slippery (making the model thinner or reshaping her nose vs. making her appear completely different--unidentifiable as the original person)--where do we draw the line on edits? But basically I see a difference between an edit and a creation. AI can generate art (just like it can generate music or poems). But something that is a JPEG is not always photography--there had to be a basis in some form of reality that the photographer shot. Maybe he used light painting, or she used special effects, or the edit removed distracting objects. Maybe there was photo stacking to get a sharper image. But the photo wasn't created by prompts, or by combining the work of 300 different photographers. And that's what AI-generated "photography" does.

And (c) it wasn't created by the artist primarily with a camera. It may be another form of artwork, but it's not photography. Let me explain. I can play a violin solo. Or I can use a keyboard to imitate the sound of a violin. Or I can use my computer to generate the sound of a violin playing that music (not a recording, but AI generating the same piece with the same dynamics as the actual violinist's performance). Is it all music? Yes--just like visual art created with a camera, or with AI, or with a paint brush is all visual art. But some of that art was created by a single person using a camera and then perhaps some editing, while the other was created by a computer running an AI program that drew on the work of the hundreds of other artists it was trained on. That's a substantial difference in my view, and it goes to HOW the art was created.

Finally, I'll offer (d): if AI becomes acceptable for generating photos (rather than being regarded as an AI visual product), it means we can't trust photography. To Levina's point, we would then have to regard all photography as a potential lie. Let me give you an explicit example, using a visual creation not produced by me. This was on Donald Trump's "Truth Social" account (so supposedly posted by Donald Trump) and supposedly showing what a devout Christian he is, praying in church. This is a close-up of the picture--look at the fingers!
[Attached screenshot: close-up of the hands in the image]

Yeah, six digits (five fingers plus a thumb) on his right hand--clearly AI generated. And he's trying to pass it off as a photograph. We have to draw the line at visual depictions that were created by AI (rather than photos that were merely edited or refined with AI's help); otherwise no photo can be regarded as real--just, perhaps, as the product of someone who is good at a keyboard.

So let me stop my venting and try to directly answer your questions (which are good ones). I think a photography site has no place for AI-generated visual art. It may be art, but so are paintings and sculpture and performance art--they're all visual art. We're a photography site.

So where do we draw the line? I think if you use AI to enhance an aspect of a photo (I use Topaz a lot to remove noise, add sharpness, smooth a face), that's legit in my opinion. I took the photo, and I'm not adding the work of others; if you compared my original photo and the final edit, they'd look pretty similar. When we use AI to add a completely different background (putting in a magnificent sunset, adding a bull elephant for dramatic effect), that is BS--it is taking the work of others, for which they aren't credited, and inserting it into my photo. And when you use AI to generate visual art (by means of prompts), that's not photography.

What about special effects? For instance, I've submitted photos to this site that involve light painting. I'm playing with double exposures and hope to submit some of those as well. But in all cases the content comes from me (I was the photographer): no prompts were used, no work from anyone else, nothing AI-generated, no stock photography. Regardless of what editing tools I used, the initial content started with something I captured using only my camera.
 
Indeed, AI refers to many different things.

I use ChatGPT and Google Gemini quite a bit. Mostly for writing text, like when I was reorganising the forum and needed meta descriptions of 140 characters max. And also with renaming forums and threads: I gave them prompts and they would give me 3 or 4 possible new titles. They were rarely the ones I ended up using, but they did give me new ideas. So it was helpful.

I also consult them when I am looking for information about something. I could find that information myself using a standard search engine, but ChatGPT or Gemini or even Duck AI does it in a fraction of the time.

But now images.

I use AI only for things that I can do manually too, and always did do manually, but that AI simply does faster: one click and my subject is selected; one little stroke and the bit of dirt on the ground is gone. Nothing I couldn't have done myself with the selection tool or the clone tool. This is where I draw the line for myself.

If, for example, I had a perfect bird portrait but the head was tilted the wrong way, I would not "enhance" the image by having AI replace that head with a correctly tilted one. But I would use the AI remove tool to remove a distracting twig somewhere, as that is what I could and would have done without AI too, using the clone tool.

In terms of people: I have never given an AI a prompt to create a human being or an animal or whatever. Most of the time it's easy to tell if a person was AI generated. Hands are still a giveaway, exactly like in the Trump picture: five fingers and a thumb. And most AI-generated images are too slick.

I'm sure AI can be used to create art. And as long as it is made clear that an image was created with the help of AI, I'm fine with it. My problem lies in people doing so and not telling. Therein lies the lie for me. But the question is how we humans perceive such works.

A while ago somebody posted a song on Instagram that they had written. It was a song in which the singer spoke in Jesus' name. I'm not religious at all but even I was moved by it. It was a very powerful song, a powerful voice. It went viral on Instagram and YouTube. And then it turned out the singer used AI. That sort of killed the song and the whole vibe about it. The song lost its impact the moment people realised it "wasn't real". And yet it was, the words were still there, the voice was still there. But just the knowledge it was AI totally changed the experience. I see it in AI generated videos on IG as well. Videos about cat shenanigans and it's great fun, and then somebody in the comments points out it's AI and that takes all the fun away. It's an interesting phenomenon that the moment we know something is AI generated it changes our perception of it; what we enjoyed when we didn't know, we no longer enjoy when we do know.

As far as the forum is concerned, I would like to keep it clear of AI generated stuff. But not sure it can be done.
 
Before AI was so heavily entrenched in our work (and it's just getting started!), some PJ got into a lot of trouble for some Photoshop work he did on a photo of his. I don't remember all of the details, but he ADDED or SUBTRACTED something from the image submitted to his agency (or boss or whomever). I have mixed feelings on that, because in the film days ANYONE who spent any time printing their own work would or could crop their images to remove some distracting part of the image, whether it was a messy background that was visible on one side, or a tree/building/car/object at the side or top or bottom of the image that was bothersome. Enlarge the image just a wee bit, slide the easel in the appropriate direction, insert paper, and print.

AI can now do that, sometimes better and sometimes not nearly as well as the heal/clone tools in LrC. I suppose they're called the same things in Ps, but I don't know how to use that software.

If a landscape was shot in the blue hour or early morning, whether on film or digitally, there is almost NEVER enough dynamic range to capture detail in the shadows without blowing out the sky. So film shooters would expose for one or the other and then dodge or burn (depending on what the print needed) with their hands, or small masks, or what have you. In Lr you can select the radial or linear gradient mask and do the same thing. With the AI functions in LrC, all one has to do is select (for example) "sky" and the computer will detect what it considers the sky. Sometimes it's spot on, sometimes it leaves too much halo/shadow around non-sky items, sometimes it misses completely. The same thing is being accomplished as in the film days--just much easier and quicker.
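As an aside, the "select the sky, pull back the exposure" step described above can be sketched in a few lines of Python. This is only a toy illustration: the function name and its parameters are invented for the example, and the crude luminance threshold stands in for the learned segmentation a tool like LrC actually uses.

```python
import numpy as np

def pull_back_sky(image, threshold=0.8, stops=-1.0):
    """Darken bright, sky-like regions of a float RGB image in [0, 1].

    threshold: luminance above which a pixel is treated as "sky".
    stops: exposure adjustment (in stops) applied to the masked pixels.
    """
    # Rec. 709 luma as a rough per-pixel brightness estimate
    luma = image @ np.array([0.2126, 0.7152, 0.0722])
    sky = luma > threshold           # stand-in for an AI "Select Sky" mask
    out = image.copy()
    out[sky] *= 2.0 ** stops         # e.g. -1 stop halves the brightness
    return np.clip(out, 0.0, 1.0), sky

# Toy frame: bright "sky" in the top rows, dark "land" below
frame = np.zeros((4, 4, 3))
frame[:2] = 0.9
frame[2:] = 0.2
adjusted, sky_mask = pull_back_sky(frame)
```

Real masking tools refine the selection with feathering and edge-aware blending, which is exactly where the halos mentioned above come from when the mask is imperfect.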

I've used AI remove to take out people way in the back of an image who distract from the subject that was 50-60 yards closer to me. That's not, in my opinion, being untruthful or deceitful; I've not altered the subject or the surrounding area to the point of making either look like a lie. I've also used AI to remove hotspots on glass that is not incidental to the subject matter.

I DON'T like (not that I can do anything about it) the generation of an image strictly from descriptors, or from descriptors plus a base photo, to create something that's not there--at least not when it comes to people. Want to make an image of a plane, train, or automobile (sounds like a forum :brgrin:)? Then knock yourself out, but please don't pass it off as a real item.

There was a trend about 2 weeks ago on FB to upload a photo of yourself and have Meta do an AI 'cartoon/caricature' of you in your workplace. Yeah, most were pretty cute. I didn't do it, though, in part because of the altruistic sense that if you truly wanted something like that, you should have paid a graphic artist (a real person) to draw it, and in part because I'm not quite talented enough to figure out how to do it. Mostly the 'use a real artist' part....



Wow. I must need to talk to someone, because I've written more here than I have in a month...
 
... Where do you draw the line?
I would not like A.I. images at all (they're not photographs, after all), nor any of the A.I.-generated additions to photos (replacement skies, features that were not in the real scene, etc.).

My view is that replacing human ability, creativity, decision-making, etc., with A.I., is a road to disaster; and we need to stop, before that disaster happens. Avoiding the use of A.I. in photography is a small step in the right direction.
 
I agree!

It's tricky though. Take sky replacement. People have been doing that manually for, well, forever I guess, and not finding fault in it. And one could argue that if an ugly sky ruins an otherwise beautiful landscape, then replacing that sky with something that matches that beauty is legitimate; it doesn't change the landscape itself. Now AI can do it too, and so people use it. I myself have never replaced a sky, as I always thought it crossed a line. I'd rather go back and shoot the landscape again at the right time, with the right light. Or just accept the sky as is, editing it as best I can.

Is that being purist? Is sky replacement in itself permissible? Done manually or using AI?
 
There is an element there with AI that is important to consider. AI (when you're using prompts) creates images based on the photos of hundreds (or thousands) of other photographers, none of whom are credited in your image (or by the AI program). If you (Levina) manually replace the sky or clone out 3 people in the background to create a different-looking image, you're not taking the work of other photographers. You may be altering the depiction of reality, but it's all your work (you just didn't do it "in camera").
 
... Wow. I must need to talk to someone, because I've written more here than I have in a month...
I'm sure there's a 12-step group for that sort of thing.
 
... If you (Levina) manually replace the sky or clone out 3 people in the background ... it's all your work (you just didn't do it "in camera").
That's a good and valid point and something to keep in mind. Because when we say AI does this or does that, it doesn't really do anything: it just taps into data collected from the internet, from countless images by other photographers. AI doesn't create; it copies, it reuses. And yes, it doesn't give credit.
 
... Is that being purist? Is sky replacement in itself permissible? Done manually or using AI?
I suppose it partly depends upon whether one considers the real scene as what is to be portrayed (albeit with minor enhancements, like saturation, contrast, etc.), or merely treating it as the starting point for a work of art. My guess would be that most photographers probably veer towards the former (I certainly do).
 
Sure. But if AI is used to create such a "work of art" it would be nice to know because that changes the landscape somewhat (pun intended), at least it does for me.

This thread also got me thinking about filters and presets and where they fit in. I've used some in the past, just presets in Lightroom or Photoshop, mostly for B&W images. But recently I've started to use filters and presets for my iPhone snaps for the games, to make them look a bit better, using apps like Photoscape (on the Mac) and Wholegrain (on the iPad). That is not AI but it's something superimposed on an image. Something that enhances an image. Something not caught in camera. Would we consider filters and presets legit editing tools? And do we have to say we applied one when posting an image?
 
... manually replace the sky or clone out 3 people in the background to create a different looking image, you're not taking the work of other photographers. You may be altering the depiction of reality. But it's all your work (you just didn't do it "in camera").
The example I used for the sky isn't even replacing it--just selecting it so I can then pull back the exposure or highlights as needed. I can use the AI tool to select the sky, or I can use the brush tool to select it. It's still all my work, just as if I had printed it from a negative that I exposed.
 
... Would we consider filters and presets legit editing tools? And do we have to say we applied one when posting an image?
Good points. I do use software filters and presets sometimes (e.g. film sims or various colour filters on B&W shots), but they are not A.I. I treat them as a normal part of post-processing. I think that if we had to declare them, then, to be consistent, we'd probably have to declare the whole editing process, which would become tedious very quickly.
 
in reference to what?
Sorry, that was meant to be humor. You had closed your (very good) post by saying "I must need to talk to someone, because I've written more here than I have in a month", so I was attempting a joke about 12-step programs for dealing with various "problems." Which of course is silly, given how long my own response to Levina was.
 