I refuse to use this technology, even if it would make me money
There is a new version of Grok, the Elon Musk-owned xAI bot used on Twitter/X, coming out later today.
We were discussing it on a UK AI WhatsApp group that I'm part of, full of some great technologists.
This AI has expressed sympathy for Hitler and produced hate speech. Last time, when it started talking about white genocide, the company acknowledged that this was in direct response to a change that someone (they won't say who) had made to its system prompt.
I asked, "What would it take for each of us to take a stand and say, 'I refuse to use this technology (even if it would make me money)'?" and included a link to a Guardian article. Responses included:
- "I just see it as a bug - we all get them. Theirs is just played out in public to a huge audience."
- "We are still at the stage of it being a 'tool' and as such a tool has good and bad use, but also bugs."
I don't want to alienate those people because I feel positively about them as humans, but I want to stand up and say that I don't agree with that view.
After all, even if we choose to give xAI the benefit of the doubt, and say it’s not deliberate...
Wouldn’t that then suggest it’s a sign of a systemically irresponsible, inadequate approach to safety and alignment?
A genuine question to ask ourselves as technologists: is there anything an AI could say that would make you respond, "Hmmm, I'm not willing to use this, even though it's economically valuable"?
I don’t know whether it’s helpful to create division in this way. On balance, I think it is more important to point out that we are blurring what I believe should be a clear line in the sand, and to stand publicly on one side of it.