New Jersey, June 25 (The Conversation) Using artificial intelligence to create art is nothing new. It's as old as artificial intelligence itself.
What's new is that there is now a range of tools that allow most people to generate images by typing in text prompts. You just write "a landscape in the style of Van Gogh" in the text box, and the AI will follow the instructions to create a beautiful image.
The power of this technology lies in its ability to use human language to control the generation of art. But can these systems accurately represent the artist’s vision? Can bringing language into artistic creation really lead to artistic breakthroughs?
Engineering the output
As an artist and computer scientist, I have been working on generative artificial intelligence for many years, and I think this new type of tool limits the creative process.
When you write text prompts to generate images with AI, the possibilities are endless. If you’re a casual user, you’re likely to be satisfied with what artificial intelligence generates for you. Startups and investors have poured billions of dollars into the technology, seeing it as an easy way to generate graphics for articles, video game characters and advertisements.
In contrast, an artist may need to write a prompt like an essay to produce a high-quality image that reflects their vision—with the right composition, the right lighting, and the right shadows.
That long prompt isn't necessarily a description of the image; more often it strings together keywords meant to steer the system toward the picture in the artist's head. There's a relatively new term for this: prompt engineering.
Basically, the role of the artist using these tools is reduced to reverse-engineering the system: finding the right keywords that force it to produce the desired output. It takes a lot of effort, and a lot of trial and error, to find those words.
Artificial intelligence isn’t as smart as it looks
To understand how to better control the output, it’s important to realize that most of these systems are trained on images and captions found on the internet.
Think about what a typical image caption says about the image. Captions are often written to complement the visual experience in web browsing.
For example, a caption might name the photographer and the copyright holder. On some sites, such as Flickr, captions often describe the type of camera and lens used. On other sites, captions describe the graphics engine and hardware used to render the image.
So, to write useful text prompts, users need to sprinkle in many of these non-descriptive keywords to get the AI system to produce a corresponding image.
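To make that concrete, here is a minimal sketch of what such a keyword-laden prompt looks like in code, using the open-source Stable Diffusion model through the Hugging Face diffusers library (one of the tools mentioned below); the model identifier, keywords and settings are illustrative assumptions, not taken from any artist's actual workflow.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A descriptive phrase plus the kind of non-descriptive keywords that
# echo caption metadata scraped from the web (camera, lens, render engine).
prompt = (
    "a lighthouse on a rocky coast at dusk, dramatic rim lighting, "
    "wide-angle composition, 35mm lens, f/1.8, long exposure, "
    "unreal engine, trending on artstation, 8k, highly detailed"
)

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("keyword_prompt.png")

Notice that half of the prompt says nothing about the scene itself; the lens, the render engine and "trending on artstation" are there only to echo the captions the model was trained on.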
Today's artificial intelligence systems are not as smart as they seem. They are essentially clever retrieval systems with a huge memory, and they work by association.
Artists Frustrated by Lack of Control
Is this really the kind of tool that can help artists create great work?
At Playform AI, a generative AI art platform I founded, we conducted a survey to better understand artists’ experiences with generative AI.
We gathered feedback from over 500 digital artists, traditional painters, photographers, illustrators and graphic designers who have used platforms such as DALL-E, Stable Diffusion and Midjourney.
Only 46 percent of respondents rated such tools as "very useful," while 32 percent found them somewhat useful but were unable to integrate them into their workflow. The remaining 22 percent thought they were not useful at all.
The main limitation highlighted by artists and designers is lack of control. On a scale of 0 to 10, with 10 being the most in control, respondents described their ability to control outcomes between 4 and 5. Half of the respondents found the output to be interesting, but not high enough quality to be useful in their practice.
When it comes to whether generative AI will affect their practice, 90 percent of the artists surveyed think it will: 46 percent expect the impact to be positive, 7 percent predict it will be negative, and 37 percent feel their practice will be affected but are unsure how.
The best visual art transcends language
Are these limitations fundamental, or will they disappear as technology advances?
Of course, newer versions of these generative AI tools will give users more control over the output, as well as higher resolution and better image quality.
But for me, as far as art is concerned, the main limitation is fundamental: it is the process of using language as the main driving force for generating images.
Visual artists, by definition, are visual thinkers. When they imagine their work, they usually draw from visual references rather than words—memories, photo collections, or other works of art they come across.
When language dominates image generation, I see an additional barrier between the artist and the digital canvas. Pixels can only be rendered through the lens of language. Artists lose the freedom to manipulate pixels outside semantic boundaries.
There is another fundamental limitation of text-to-image technology.
If two artists enter the exact same prompt, the system is very unlikely to generate the same image. That has nothing to do with anything the artist did: the different results are simply due to the AI starting from different random initial images.
In other words, the artist’s work comes down to chance.
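To see why, here is a minimal sketch, again assuming Stable Diffusion via the Hugging Face diffusers library rather than any particular tool named in the article: the only thing that differs between the first two runs below is the random seed that determines the initial noise image the diffusion process starts from.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a landscape in the style of Van Gogh"

# Each seed fixes the random noise image the diffusion process starts from.
image_a = pipe(prompt, generator=torch.Generator(device="cuda").manual_seed(1)).images[0]
image_b = pipe(prompt, generator=torch.Generator(device="cuda").manual_seed(2)).images[0]

# Re-using the first seed reproduces image_a exactly.
image_c = pipe(prompt, generator=torch.Generator(device="cuda").manual_seed(1)).images[0]

Fixing the seed makes a result reproducible, but most tools pick it at random by default, so two artists typing identical prompts will almost never see the same picture.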
Nearly two-thirds of the artists we surveyed were concerned that the images they generate with AI might resemble the work of other artists, and that the technology would not reflect their identity, or would even replace it entirely.
The question of artist identity is crucial when making and recognizing art. In the 19th century, when photography became popular, there was a debate about whether photography was an art form.
A court case in France in 1861 decided whether photography could receive copyright protection as an art form. The decision hinged on whether the artist's unique identity could be expressed through photographs.
The same question arises when considering AI systems that are trained on images gathered from the internet.
Before text-to-image prompts, creating art with AI was a much more involved process: Artists typically trained their own AI models on their own images.
This allowed them to use their own work as visual reference and retain more control over an output that better reflected their unique style.
Text-to-image tools can be useful for some creators and for everyday casual users who want to create graphics for work presentations or social media posts.
But when it comes to art, I don't see how text-to-image software can adequately reflect an artist's true intent, or capture beauty and emotional resonance, or engage the viewer and make them see the world anew. (The Conversation)