Monday, December 18, 2023

The Allegory of AI - History of Photojournalism Final Project by Madeline Jacyszn



My set of illustrations was intended to comment on how I don't believe images generated by AI can qualify as photojournalism or replace it. In my first image, a photojournalist is documenting an event, caught in a moment of checking their camera before resuming shooting while the people around them continue to protest. The second image is of a computer monitor, a representation of AI, "watching" the events but unable to understand or absorb them in full detail, which is why the people are mere silhouettes as opposed to the fully colored versions in the first image.


The premise of my illustrations was inspired by three things: my frustration with AI image generator users who often try to draw a false equivalency between photography in general and AI image generation, a video posted by Vox called "Why AI Art Struggles with Hands", and Plato's Allegory of the Cave.

The frustration with AI image generator users comes from my personal experience. When challenged on their views, many AI image generator users bring up photography, and by extension photojournalism, to justify their use and the increasing prevalence of AI image generators. Broadly it is a ludicrous point, but for photojournalism especially, AI simply cannot replace the function of photojournalists. Maybe it can approximate a "photo", but the AI cannot do the "journalist" part. Even if it generates an image on the subject of a current event, the fact that it was generated means, in and of itself, that it is not photojournalism. The ethics of photojournalism are strict about even editing images, because authenticity is essential for viewers to trust that the events actually happened as the photographs show them.

In "Why AI Art Struggles with Hands", at 1:43 the speaker Phil Edwards says, "All the [AI] has to learn from are the pictures..." to help explain why AI struggles with things like rendering hands properly. The gist of what he's saying is that the images AI trains on are good at helping it spot patterns, but the AI doesn't understand the object in 3D space.

That concept of AI being unable to understand objects in 3D space from images alone is what inspired me to represent the AI in what is essentially Plato's Cave. In the original allegory, the people chained to the wall, who had never seen the outside world, only saw shadows cast on a cave wall, which limited their information and understanding, yet they lacked the knowledge to realize it. The AI is like them in that way: the shadows cast on its wall are the images it is fed to train on, but it can't go beyond that and "leave the cave", so to speak.

************************************************************************************

Madeline's Midterm Project on Red-Tagging is not to be missed. From her introduction:

Red-tagging is the practice of branding a journalist as a member or associate of a party hostile to the government, leading to blacklisting and harassment of the journalist in the hopes of stifling their work. This topic is of interest to me because red-tagging is a problem in the Philippines, my mom's country, where photojournalists are branded as terrorists or communists so the government has a public excuse to kill them.
