Post by Valhalla Erikson on Feb 5, 2023 0:23:58 GMT -6
That's a valid point. Another thing I take issue with is the censorship in both MidJourney and ChatGPT. Say someone wants MidJourney to illustrate art for their story, and the story contains violence. The software prohibits such concepts to a questionable degree.
It's that which rubs me the wrong way. Whether you're a writer, an artist, or both, you shouldn't have your work censored or handicapped. Being a creator is a thankless job, and you'll always find someone who's easily offended.
For a personal example, my dark fantasy story The Ballad of Ragna Skarsgard has violence and subject matter that might not be easy on the reader. I've made peace with that and decided to see the story through. A creator shouldn't censor themselves because they're afraid of offending someone, because it doesn't do the creator any favors and it limits their imagination.
Post by Valhalla Erikson on Feb 5, 2023 0:46:30 GMT -6
There are some perks. It helped me produce a book cover for my urban fantasy series. That and photoshop.
Post by havekrillwhaletravel on Feb 5, 2023 3:02:36 GMT -6
I also have mixed feelings about AI art. I've heard the training-sets argument Bird mentioned, but I don't understand enough about AI to agree or disagree with it. To me, it sounds similar to how humans make art? If we want to write a fantasy book, we read a ton of fantasy books for inspiration and ideas on how the genre works. So I'm not sure why it's not okay for AI to go through the same process. My reservation is that we still haven't adapted our way of life and our legislation to handle the last technological breakthrough, the Internet, and we've seen the real costs of that failure in the recent pandemic and in the 2016 US election. Heck, I don't think we've even fully adjusted to the mass use of computers yet. How are we going to deal with the economic, social, and legal ramifications of AI? I feel like we're building a fancy Ferrari and hopping in before we've even learned how to pedal a bike.
Anyway, here's a thing I did with ChatGPT:
Bird
Counselor
Posts: 350
Custom Title: World Creator and Destroyer
Preferred Pronouns: they/them/their
HARD: 1700
MEDIUM: 400
EASY: 110
Post by Bird on Feb 5, 2023 16:27:23 GMT -6
Part of the argument against the AI training sets is how the sets work:
1. The training set takes an EXACT COPY of the work EXACTLY AS IT IS, and it does this for MILLIONS of artworks, usually without asking permission. The equivalent with writing would be taking several fantasy books verbatim and reprinting them under your name. All of that artwork then becomes part of the database the AI pulls from when it gets its queries, meaning all of the art taken without permission exists within the AI engines in its original form.
2. The AI algorithms then go through the training sets again and again to build up profiles of certain tag words. So when users input those specific words, the AI knows to pull up those pieces of art to smash together with other pieces of art. This is the recombining and creating anew. But to get to this point, millions of artworks were taken without permission and used to train the AI enough for it to actually try to create something different.
The issue is with step 1. That's where the legality of it all is starting to come to a head, and I think it's right to examine the ethics and morality of step 1. When people create things, they don't create them to be used to make fancy tech AIs; they create them to be enjoyed by others. There's a consent element here that big tech refuses to acknowledge, but that we MUST acknowledge if we are to build a better world. When we write stories inspired by things, we aren't taking those stories verbatim, yet big tech companies try to claim it is fine if AI does that? I don't think it is. I think we need a legal framework where these training sets must use artwork with permission, to create a culture of consent here. I wouldn't want all the writing I did to just be used in a training set without my knowledge. I'd want them to ask permission, and to show me how that training set will influence the output of that engine. I'd want to know why they are bothering to make this at all. What's the purpose?
Is the purpose to churn out a bunch of shitty articles? (That's what CNet is doing with its financial advice column: saturating the web with shitty AI-written articles that experts then examined and found to be factually wrong a lot of the time, all to push its site higher in the search rankings.) Or will it be used as a way to embolden people's imagination and creativity, or to research language and art in general?
Technology can be used for good or ill. And I really think those creating these programs need to take a step back and think long and hard about the ethics of and reasons for what they are creating. We shouldn't just create whatever and then get upset when there are hefty consequences to what we created -- it's better to truly think through the ethics and purpose before creation, so that we don't cause more harm than good. That's something the Tech sector has failed to do thus far.
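The two-step pipeline described above (step 1: ingest tagged copies of works; step 2: build tag profiles to pull from on a query) can be sketched as a deliberately naive toy. Everything here is hypothetical and hugely simplified; it illustrates the ingest-then-lookup idea from the description, not how any production engine actually works.

```python
from collections import defaultdict

# Step 1 (as described): ingest works into a database, keyed by their tags.
# The work names and tags below are made up for illustration.
def ingest(database, work, tags):
    for tag in tags:
        database[tag].append(work)

# Step 2 (as described): answer a query by pulling up the tagged works
# and "smashing them together" (here, just joining their names).
def generate(database, query_tags):
    pulled = [w for t in query_tags for w in database.get(t, [])]
    return " + ".join(pulled) if pulled else "(no matches)"

db = defaultdict(list)
ingest(db, "castle_painting", ["castle", "fantasy"])
ingest(db, "dragon_sketch", ["dragon", "fantasy"])
print(generate(db, ["fantasy"]))  # pulls both works tagged "fantasy"
```

The toy makes the consent point concrete: nothing in step 1 asks the creators of `castle_painting` or `dragon_sketch` for permission before they land in the database.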
So that's more of a clarification of my statement. I hope that helped clarify!
Post by Bird on Feb 5, 2023 16:43:00 GMT -6
Oh for sure!
There are pros and cons to all of this.
I think if we can sort out a better legal framework for step 1 (the training sets) of these AI engines, then maybe this can be a useful tool. But we first gotta get through that battle, so I have been waiting and watching to see how that battle falls.
Does that mean people shouldn't use it? That's not for me to say. I'm just sharing what I know of the legal battles and the ethics problems these AI engines currently have. I know that some people adhere to their ethics very intently, so they may appreciate this knowledge as they make an educated decision about what to do with it. I'm also not saying folks shouldn't use it -- and I'm not bothered that folks use the current tools as is (at least if they are using them for fun; I am bothered if the tools are used in classes as if they were legitimate sources, because they can't be -- that's not how the engines work).
For the writing AI stuff:
I do think that ChatGPT, as it currently exists, is turning into a harmful tool, because it spits out words as if they were facts, but if you research some of what it spits out, it's not always factual. Sometimes it's just random. So I think those professors (and yes, I've discovered a lot of professors are playing with this tool in their classes) allowing students to use it as a source are wrong. You can't rely on a tool that isn't able to distinguish fact from fiction, that just pulls words and sentences based on key words in queries. This is only gonna hurt people's ability to distinguish truth from fiction. So unless the tool is being used to help students learn critical thinking and media literacy (most situations I've read about do not use it that way, tho), professors probably should NOT be using it as a source.
A funny aside: my favorite story about the troubles with the writing AI ChatGPT is the librarians sharing on Mastodon, who keep reporting that folks come in asking for a book that was recommended to them by ChatGPT, except the AI suggested a book that doesn't exist at all. It just combined a bunch of titles and assigned the result to a random author, and the librarians have to explain why it's not always good to trust AI, as well as show the person what the actual real books by that author are (often the books aren't at all related to what the person had been asking ChatGPT about).
Some may argue that ChatGPT makes it clear it's not always factual -- and honestly, it didn't make that clear when it was first released, when its claims to pull data for people were overhyped. It's only after more and more incidents like the above that disclaimers were added to the interface.
And that's the problem. Tech shouldn't wait until the consequences start harming folks and/or causing a ruckus. Tech should have examined the ethics and purpose of this beforehand, built in ethical solutions (and disclaimers), and kept the disclaimer front and center from the very beginning.
In general:
So yes, I may be focusing on the cons right now. But I think that's important to examine while we decide on whether to use these tools. Our ethics come into play here, and should we support engines that fail our own ethics? That's a question only we can answer for ourselves. In the meantime, we can keep putting pressure on Tech folks to actually learn about ethics and how to do better, so that we don't have to have these conversations and legal battles. We can make ethical technology, but we have to pressure the tech groups to do that.
Post by Valhalla Erikson on Feb 5, 2023 18:07:00 GMT -6
Speaking from personal experience as a writer, the ChatGPT program can be useful if you want to do fan fiction, or a short story. But if you want to write a novel-length story, that is when you have to rely on your own imaginative creativity. Still, I find the program fun, especially when I do a D&D moral alignment for my characters to see where they stand on the morality scale. It's also useful if you want to make a fictional country.
Post by Bird on Feb 5, 2023 18:48:28 GMT -6
And that's the pro of it -- when you use it for fictional things, then it has some good use to it. It's when people try to use it for factual things that I find it ethically wrong and problematic.
Okay, totally random aside: I also find the term AI misleading, as none of these programs are actually intelligent in any way. They are layers of algorithms coded to learn as they take in more data and do finer passes over training sets. They require inputs to function and spit out outputs, but the programs can't distinguish fact from fiction or engage in any self-examination of what they output. So I wish they used a more accurate name. LOL. Like FA for Fun Algorithms, or LA for Learning Algorithms.
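The aside above (layers of algorithms that take inputs and spit out outputs with no grasp of truth) can be illustrated with a deliberately tiny sketch: a bigram word chain that produces fluent-looking text while having no notion of fact. All names and sentences here are hypothetical, and this toy is nothing like how ChatGPT actually works internally.

```python
import random

# Toy bigram "language model": it only learns which word tends to follow
# which, and has no concept of whether the output is true.
def train(corpus):
    model = {}
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=5, seed=0):
    random.seed(seed)  # fixed seed so runs are repeatable
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break  # no known follower for this word; stop
        out.append(random.choice(nxt))
    return " ".join(out)

corpus = ["the moon orbits the earth", "the earth orbits the sun"]
m = train(corpus)
# The chain can stitch together a false statement like
# "the moon orbits the sun": plausible word order, wrong fact.
print(generate(m, "the"))
```

Scaled up enormously, the same basic limitation holds: pattern-matched output reads fluently whether or not it happens to be factual, which is exactly why it can invent book titles that don't exist.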
Post by Valhalla Erikson on Feb 5, 2023 19:57:26 GMT -6
In regard to MidJourney, I'd say it's okay to use it, but try to develop your own artistic style from it. Even if you can't draw to save your life, MidJourney is a useful tool. But taking someone's artistic style is just as bad as plagiarizing a writer's work.
Post by RAVENEYE on Feb 6, 2023 11:18:09 GMT -6
Though thinking about this in depth.... I guess for me, I am uncertain how I feel about AI art. I think I'd be more interested in it IF the training sets used to make the AI engines weren't stealing people's art to do it. There's been a LOT of pressure from artists furious at their work being used without permission to train these engines. Some art places are starting to draft legal stuff to prohibit using their art for training sets, so it's a quagmire of legal stuff at the moment. I think if the AI engine created a training dataset that used only art for which it had permission, then I'd be more willing to enjoy AI art generation. I really think the ethics of how these training sets are set up need to be examined and worked out properly. It's the same with ChatGPT, which is AI writing, as they stole a lot of written works to create a training dataset. There's another AI generator, which only used creative works in the public domain or with permission, that works decently well; I can't remember its name, as it was more of an AI writing one instead of art.
Ah! I wondered how the heck these AIs got their "knowledge." I figured they just plumbed the web for available comparisons and compiled elements from all over the place. Interesting. Still... I'm in love with MJ, so until it gets shut down for theft or whatever, I'll be plugging in prompts.
Post by RAVENEYE on Feb 6, 2023 11:23:36 GMT -6
Hehe, crack me up. Well, it can write a passable blurb/outline anyway.
Post by Valhalla Erikson on Feb 6, 2023 13:53:23 GMT -6
Although I do see a situation where MidJourney could end up in some legal trouble over artists accusing it of stealing their art styles. It can be easy to produce art based on the style of a particular artist.
Post by RAVENEYE on Feb 6, 2023 14:43:10 GMT -6
Gah! Love this cover!
Post by RAVENEYE on Feb 6, 2023 14:51:09 GMT -6
And yeah, here's an example of why this art generation thing is so important to me. And I'm going to embarrass myself hugely here. When I was testing out the self-pub thing a loooong time ago, this was the best I could do for my story "Fire Eater": Yep, that's embarrassing. And I totally nicked that eye from somewhere, cut it out, duplicated it, and used it twice. Don't laugh too loud; I was broke and didn't know how to find cover artists. Guess how many downloads of that story I sold? About two. Never mind that it got great reviews when it came out in the original magazine. Cuz, yeah, cover art matters.
On to the eye candy. This is the gorgeousness MJ gives me with the following prompt: "A closeup of demonic red eyes, slit pupils like a snake, red skin, wreathed in flame, Octane render --ar 2:3" -- OMG, I would so buy this. Fonts and placement are all tentative, but I'm definitely re-releasing this story with this cover image.
Post by RAVENEYE on Feb 6, 2023 15:02:00 GMT -6
Also, Bird - I've been trying to create the trees on Elivera for you. So far I've gotten these returns: Lots of purple, and beautiful, but I don't think they're quite right.
Post by Bird on Feb 6, 2023 17:26:55 GMT -6
OMG. That is awesome. The bottom one with the huge-ass trees is closer, but the buildings and infrastructure are in the canopy, and the trees grow a lot closer together (their branches tend to intertwine, since essentially the Raliok forest is one giant tree -- like how certain types of aspens are essentially all one tree, growing in large clonal colonies from one seed). The cities in the canopy are built over multiple intersecting branches. The violet colors in these are really close to the violet colors of Elivera, though!!! Thank you!!
EDIT: Raveneye, how you feel about it and your use of it are valid. I was mostly just sharing the legal issues that the creators of these engines are facing. I'd feel better about using it myself once the creators implement ethical training sets, but if others find it helpful and are currently using it, then that's fine. : )