It would be the same with liberal talking points and in general any human talking point.
Humans try to reshape reality into what they want it to be, so the things they say are always incorrect. When they want to increase something, they usually make it appear smaller than it really is. And appearances are not universal.
Humans also simplify things in ways that are acceptable for one subject but not for another.
Humans also don’t know what “correct information” is.
A lot of philosophy connected to language starts to matter when your main approach to "AI" is text extrapolation.
Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who in varying degrees and colours are mostly united under the conservative "anti-science" folks.
And you want an AI that doesn't offend these folks / is trained on their output. What use would that be?
So you’re saying you lie to try and change reality or present it in a different way?
That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.
We are not arguing out of some tribal belief, we have our morals and we will constantly test them to try and be better humans for our fellow humans.
No. You are damn fucking well illustrating what I said, though.
I think you hurt people's feelings lmao.
The truth just isn't very catchy. Thanks for trying though. I'm still on Lemmy for people like you.
Just because you are a liar does not mean that all humans are egoistic liars. Of course there are a lot of them, but it is not a general human thing; it's cultural and regional. Liars want you to believe that everyone is lying all the time, because that makes their lives easier. But feel free not to believe me 😇.