Images and the technology used to create, transmit, view, analyze, and modify them are in constant flux.
Cell phone screens from almost every manufacturer now support the DCI-P3 color space, an upgrade that lets them display a color gamut roughly 25% larger than the older
sRGB standard. Many users, however, responding to a wave of reporting on the digital attention crisis, are manually setting their displays to grayscale, having decided that cell phone distraction is somehow a matter of chromaticity.
Meanwhile, researchers in AI labs around the world are capitalizing on the falling price of consumer GPUs to advance breakthroughs in image processing: images are enlarged without losing detail, celebrity photos have their facial expressions manipulated and changed completely, and computers can now recognize complex objects in images and describe them in full sentences. Yet when these AI technologies are scaled up and deployed by large companies to analyze the data of hundreds of millions of users, the magic behind the algorithms is often revealed to be a vast amount of cheap, expendable, overseas human labor: searching, tagging, filtering, and moderating an endless stream of raw content.
The images too are changing. As camera manufacturers find it harder to eke out optical improvements in hardware, they turn to digital post-processing. The iPhone's Portrait Mode sets a precedent by constructing in software at least as much of the final photographic image as it actually observes from its light sensor. And those images most likely to be thought of as artificial – computer renderings – have gained new accuracy and fidelity as the film and gaming industries embrace ray-tracing, simulating light as photons bouncing around a scene of physically realistic materials.
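The core of the ray-tracing idea mentioned above is simple enough to sketch in a few lines: cast a ray from the camera into the scene and solve for where it first strikes an object. The following is a minimal illustration, not any production renderer's implementation; the scene, names, and values are invented for the example.

```python
import math

def dot(u, v):
    """Dot product of two 3-vectors represented as tuples."""
    return sum(a * b for a, b in zip(u, v))

def sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere intersection, or None.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2 for t.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0 else None

# One "photon path": a camera at the origin looking down -z at a sphere.
camera = (0.0, 0.0, 0.0)
ray_dir = (0.0, 0.0, -1.0)
hit = sphere_hit(camera, ray_dir, center=(0.0, 0.0, -3.0), radius=1.0)
# hit is the distance to the sphere's front surface (2.0 in this scene)
```

A real renderer repeats this per pixel, lets each ray bounce many times, and weights every bounce by a physically based material model; the fidelity the film and gaming industries now achieve comes from doing this billions of times per frame.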
What’s to be made of this new visual culture inhabited by image technologies we increasingly do not understand?