  1. #11
    This is a bit of a necro post, but I am curious given recent developments in AI in the creator market and in the corporate sector: the statements from Wizards of the Coast and Paizo, various court rulings, and so on...
    Quote Originally Posted by Mephisto View Post
    I recently heard an interesting podcast about the matter from a large computer magazine over here in Germany. Basically, the AI looks at thousands or millions of pictures, but it generates new ones based on its neural network. It's like asking someone to look at a Picasso and then recreate one.
    Kinda sorta, yes and no. From my experience doing some work as an artist cleaning up AI-generated artwork, it's more like a very, very complex cut-and-paste system. The AI is not drawing the picture from scratch the way an artist does; elements are chopped up from the data sets the AI was trained on, and the AI fills in the gaps with material extrapolated from the rest of the dataset.

    The problem I have is that a lot of the data sets AIs have been trained on use material that was pirated from artists around the world. For example, I read an article about how Midjourney scraped DeviantArt for a lot of its training data set, which means some of my old comic art helped train Midjourney. The New York Times has also lodged a court case against another AI platform that scraped the New York Times for its training data set.

    When I was doing some work for a gallery that was making AI-generated poster art themed on the old Dutch Masters, I had to constantly erase the signatures of long-dead artists, because the AI had worked out that all artwork should have a signature in the bottom right of the painting and would copy-paste any old signature into that space.

    I sometimes have a beef with people who claim the title of Artist because they used an AI to create some artwork, yet they could not draw a stick figure to save their life. Are they artists or are they programmers?

    Quote Originally Posted by Mephisto View Post
    As long as you don't claim it to be a Picasso, you would be fine. And an interesting detail: German copyright law protects the work of a human, so AI pictures are free to use and don't carry a copyright, as they weren't produced by a human being.
    In the last few years there have been court rulings in Europe and the US dealing specifically with AI-generated work, stating that AI-generated material (art, written content, etc.) cannot be copyrighted or trademarked as it is all derivative material.

    There is also a growing market of AI tools that sniff out AI-generated works containing material copied from artists and authors without their permission.

    Quote Originally Posted by Nylanfs View Post
    My opinion is largely this: there are art schools where people learn art from other well-known artists' works and techniques. Aside from the systems that alter and combine the actual images together, the ones that are actually using machine learning to understand what somebody means visually ("A bowl of fruit in a wooden bowl on a pedestal next to a window, in the style of Rembrandt") are doing something literally no different from a human student learning to do the same thing.
    Now I will say that I am approaching this discussion as someone who has had to fix the errors in AI-generated artwork, so I might be a bit biased and jaded on the subject, and the technology has certainly moved on quite a bit from when I used to do that work. The AI is not really learning the techniques of how to draw, how to paint light and shadow, or how to mix colours. That is the ultimate aim, to have an AI that can learn those skills. But it is not learning those skills in that way; you are personifying these early AI systems, putting them into human terms that you can understand. This is how we ended up with gods for natural phenomena that early people could wrap their heads around.

    AI will eventually get to that point where it will learn a style and techniques and will create material using those lessons, but I don't think it is there yet. An AI that can do this will not need the data sets the AIs of today need.
    Last edited by Tailz Silver Paws; February 27th, 2024 at 03:32.
    Tailz, the Artist of Studio WyldFurr
    Follow Studio WyldFurr on Twitter, Facebook, and the studio web site.
    "The London Underground, is not a resistance movement!"

  2. #12
    LordEntrails
    Quote Originally Posted by Tailz Silver Paws View Post
    The problem I have is that a lot of the data sets AIs have been trained on use material that was pirated from artists around the world. For example, I read an article about how Midjourney scraped DeviantArt for a lot of its training data set, which means some of my old comic art helped train Midjourney. The New York Times has also lodged a court case against another AI platform that scraped the New York Times for its training data set.
    <snip>
    In the last few years there have been court rulings in Europe and the US dealing specifically with AI-generated work, stating that AI-generated material (art, written content, etc.) cannot be copyrighted or trademarked as it is all derivative material.
    <snip>
    AI will eventually get to that point where it will learn a style and techniques and will create material using those lessons, but I don't think it is there yet.
    As a layman, I have a thoughtful but relatively uninformed opinion, and I find this topic interesting in many ways. For instance, your first comment implies that AI learning from a source that is published online is piracy. I don't know the answer, but if a human were to browse DeviantArt and learn by replicating what they see there hundreds, thousands, or millions of times, we might consider that derivative, but we wouldn't call it piracy?

    Humans are not the same as computer algorithms, but they are also not so different that I don't see some similarities. It will be interesting to see where both the technology and the law end up in a few decades.

    Problems? See; How to Report Issues, Bugs & Problems
    On Licensing & Distributing Community Content
    Community Contributions: Gemstones, 5E Quick Ref Decal, Adventure Module Creation, Dungeon Trinkets, Balance Disturbed, Dungeon Room Descriptions
    Note, I am not a SmiteWorks employee or representative, I'm just a user like you.

  3. #13
    Quote Originally Posted by LordEntrails View Post
    As a layman, I have a thoughtful but relatively uninformed opinion, and I find this topic interesting in many ways. For instance, your first comment implies that AI learning from a source that is published online is piracy.
    Piracy might be the wrong term, and I suspect there is a specific legal term, which I can't remember. Anyway... it is considered "wrong" because the groups that created these AI systems did not obtain permission to use the content (artwork or written material) in the data sets the AI was trained on.

    Actually, the term might be plagiarism. But even that is not really accurate, as I think plagiarism deals specifically with written content.

    Quote Originally Posted by LordEntrails View Post
    I don't know the answer, but if a human were to browse DeviantArt and learn by replicating what they see there hundreds, thousands, or millions of times, we might consider that derivative, but we wouldn't call it piracy?
    Depends on the output... if the artist just did copy-pasta, then it is the art version of plagiarism. Think of the old days when it was profitable to create forgeries of paintings by Rembrandt and try to hawk them off as an original Rembrandt. That's the original copy-pasta!

    But now we get into AI, which might have scraped the whole of DeviantArt for its data set. When it is creating art, though, it is not using an entire piece of artwork; it might just be using a part of it, a very small part, combined with lots of other parts from lots of other pictures from across the whole dataset.

    I think our current laws consider something a copy when it has around 80% similarity to an original work, but what about a work that only uses one or two percent each of a few hundred other artworks to create its image?

    I so often see the juxtaposition between a human learning to draw and an AI "learning", and I think of the old lessons I once studied on how to shade a cube, sphere, and pyramid. Do you think a current AI would be able to create the fantastic artwork we have seen if it was just "taught" those shading lessons? Or if it was given scans of Dynamic Anatomy by Burne Hogarth, how would it go drawing the human form? I can tell you, AI is famous for "sausage" hands and fingers. AI relies on large datasets to create those fantastic artworks; humans learn techniques, which we then use to create visual representations of our imagination.

    AI will eventually get to that same ability, but it is not there yet.

    Quote Originally Posted by LordEntrails View Post
    Humans are not the same as computer algorithms, but they are also not so different that I don't see some similarities. It will be interesting to see where both the technology and the law end up in a few decades.
    The problem right now is that AI relies on material created by other people, and sadly the majority of that source material has been stolen, without the permission or consent of those people.

    When an AI can be given the shading lesson of the cube, sphere, and pyramid, and then create dynamic artwork from that, it will be where we think it is heading.
    Tailz, the Artist of Studio WyldFurr
    Follow Studio WyldFurr on Twitter, Facebook, and the studio web site.
    "The London Underground, is not a resistance movement!"

  4. #14
    Quote Originally Posted by Tailz Silver Paws View Post
    AI will eventually get to that same ability, but it is not there yet.
    I seriously doubt that it will. When you boil it down to the basics, it's doing nothing more than comparing strings of 1s and 0s and coming up with similarities. Computers are never going to be able to think, no matter how many if-then statements you throw at them. All of this talk of AI is just smoke and mirrors for more complex software. It's interesting what they have come up with, but you will never be able to tell a piece of software to go do something it hasn't already been fed an example of. For instance, tell it to go paint the Sistine Chapel and you will merely get some cobbled-together piece of art that is most likely copied from the original. There's no intelligence involved, and it's not 'thinking'. It's just large models of data being scoured and catalogued. I am going to just pause my rant right here, which is something a computer could never do: make a decision it hasn't been programmed to make.

  5. #15
    ddavison
    I think it is important not to label all AI engines as having trained on data from sources without permission, or to paint them all with the same broad strokes based on the specific output from one model or another. There are numerous engines available, and some, such as Midjourney, were known to have sourced data from DeviantArt. That may be true for others, or maybe not. Stable Diffusion can be set up to run locally and sourced from any data the user chooses. It is up to the user to choose the source. Our staff artist, Josh, pointed out that we could essentially train it on all the art he has generated for SmiteWorks under a work-for-hire agreement, and then in theory it would be able to produce additional artwork in his same style. Adobe uses its own internal stock art, which it owns, and exclusively that. ChatGPT uses a version of DALL-E that was trained on images online as well as a licensed Shutterstock library, and it has safeguards in place to restrict asking for art in the style of living artists, among other features.
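    As a concrete illustration of what "run locally and sourced from any data the user chooses" can look like in practice, here is a minimal sketch using the open-source Hugging Face diffusers library. This is not a description of any SmiteWorks workflow; the model id, prompt, and file name are placeholders, and fine-tuning on an artist's own catalogue (for example DreamBooth or LoRA training) would be a separate step that is not shown here.

```python
# Minimal sketch: generate an image entirely on a local machine with a
# Stable Diffusion checkpoint, using the Hugging Face "diffusers" library.
# The model id, prompt, and output path below are placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load a checkpoint from the Hub or from a local folder (e.g. one fine-tuned
# on an artist's own work-for-hire catalogue).
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "cpu", just much slower

# Nothing is sent to a third-party service; generation happens locally.
image = pipe(
    "a bowl of fruit in a wooden bowl on a pedestal next to a window, oil painting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("bowl_of_fruit.png")
```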

  6. #16
    Quote Originally Posted by indavis View Post
    I seriously doubt that it will. When you boil it down to the basics, it's doing nothing more than comparing strings of 1s and 0s and coming up with similarities. Computers are never going to be able to think, no matter how many if-then statements you throw at them. All of this talk of AI is just smoke and mirrors for more complex software. It's interesting what they have come up with, but you will never be able to tell a piece of software to go do something it hasn't already been fed an example of. For instance, tell it to go paint the Sistine Chapel and you will merely get some cobbled-together piece of art that is most likely copied from the original. There's no intelligence involved, and it's not 'thinking'. It's just large models of data being scoured and catalogued. I am going to just pause my rant right here, which is something a computer could never do: make a decision it hasn't been programmed to make.
    I'll have to beg to differ... we are the sum of our learned experiences, modeled and catalogued by our brains, and I think AI will evolve in just the same way. The problem we face is that the fast-paced race to create a marketable product has led greedy entrepreneurs to cut corners and take risks for that financial reward.

    Quote Originally Posted by ddavison View Post
    I think it is important not to label all AI engines as having trained on data from sources without permission, or to paint them all with the same broad strokes based on the specific output from one model or another. There are numerous engines available, and some, such as Midjourney, were known to have sourced data from DeviantArt. That may be true for others, or maybe not. Stable Diffusion can be set up to run locally and sourced from any data the user chooses. It is up to the user to choose the source.
    I totally agree: not all AI platforms were trained on material scraped from here, there, and everywhere. But I think a good many, in their early development, were, for simple financial reasons: properly licensed data sets would have been expensive to obtain for experimental projects.

    Quote Originally Posted by ddavison View Post
    Our staff artist, Josh, pointed out that we could essentially train it on all the art he has generated for SmiteWorks under a work-for-hire agreement, and then in theory it would be able to produce additional artwork in his same style.
    I can see that leading into the same discussion that is happening in the movie industry over CGI versions of actors.

    I could see an AI trained on an artist's work being a very beneficial tool for that artist. Almost all digital artists now use AI in some form of AI-powered tool: smart selection tools, more realistic effect brushes, brush-stroke engines, and so on. But an AI trained on an artist's own material could result in a sticky situation in anyone's hands other than the original artist's, as mentioned above with the CGI versions of famous actors.

    Quote Originally Posted by ddavison View Post
    Adobe uses its own internal stock art, which it owns, and exclusively that. ChatGPT uses a version of DALL-E that was trained on images online as well as a licensed Shutterstock library, and it has safeguards in place to restrict asking for art in the style of living artists, among other features.
    I agree, that is the way it should be done: with material for which the content creators are reimbursed for their work.

    Although, I was talking on the weekend with a friend who is doing his doctoral thesis on AI. He was talking about a project by artists aimed at AIs that are still scraping the net for their data sets: they are purposefully throwing altered artwork at those AIs in an attempt to muddy the data sets so that they become corrupted.
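    For a rough sense of the general idea behind these dataset-muddying projects (as described in published "cloaking" research, not any specific tool), here is a toy, purely illustrative sketch: nudge an image's pixels within a small budget so that a feature extractor sees it as closer to a decoy image, while the change stays hard for a human to notice. The encoder, file names, and parameters are arbitrary placeholders, and real projects use far more sophisticated methods.

```python
# Toy illustration only: a projected-gradient perturbation that pushes an
# image's features toward a "decoy" image while keeping the pixel change
# small. Real cloaking/poisoning tools are far more sophisticated; the
# encoder, file names, and hyperparameters here are arbitrary placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A frozen pretrained encoder stands in for "whatever a scraper might train on".
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # use penultimate-layer features
encoder.eval().to(device)
for p in encoder.parameters():
    p.requires_grad_(False)

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

art = to_tensor(Image.open("my_artwork.png").convert("RGB")).unsqueeze(0).to(device)
decoy = to_tensor(Image.open("decoy_style.png").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    decoy_feat = encoder(normalize(decoy))

epsilon = 8 / 255  # maximum per-pixel change (barely visible)
delta = torch.zeros_like(art, requires_grad=True)

for _ in range(100):  # simple projected gradient descent
    loss = F.mse_loss(encoder(normalize(art + delta)), decoy_feat)
    loss.backward()
    with torch.no_grad():
        delta -= 0.005 * delta.grad.sign()            # step toward the decoy's features
        delta.clamp_(-epsilon, epsilon)               # keep the change imperceptible
        delta.copy_((art + delta).clamp(0, 1) - art)  # keep pixels in a valid range
        delta.grad.zero_()

cloaked = (art + delta).detach().clamp(0, 1).squeeze(0).cpu()
transforms.ToPILImage()(cloaked).save("my_artwork_cloaked.png")
```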
    Last edited by Tailz Silver Paws; February 28th, 2024 at 05:13.
    Tailz, the Artist of Studio WyldFurr
    Follow Studio WyldFurr on Twitter, Facebook, and the studio web site.
    "The London Underground, is not a resistance movement!"

  7. #17
    One reason for Chaosium's and other companies' stand is that anything generated by AI, or derived from an AI-generated work, is not copyrightable. But it is critical (if a bit technical) to understand that chatGPT and AI image generators are not actually AI; they are doing statistical sampling from a large dataset. In other words, all they are is gifted copiers, which is why the Copyright Office and the courts have decided it is not protected work.

    For home/personal use, use it (copyright has always allowed for personal use), but I know chatGPT has quoted material that I hold the copyright on, and I never gave permission for it to be sampled. The same holds true for trademarked and copyrighted artwork. If you take artwork from an AI program and it is found to contain a copyrighted image, you are guilty of violating the copyright if you publish it (no different from any other stolen property).

    Chaosium phrased their statement to protect the artists, but it is also protecting themselves. If they publish a work that is found to contain something in violation of copyright, they are liable (if they bought it from a freelancer, that person is also liable, but it doesn't absolve Chaosium; the ban on AI does, however, allow them to try to recoup the costs from the freelancer).
