Erik Voorhees, the founder of ShapeShift and a prominent figure in the cryptocurrency world, recently sparked a fascinating discussion on the perils of censorship in artificial intelligence. In a tweet that's gaining traction, he shares a simple yet provocative example comparing responses from two AI models to the question: "Is affirmative action racist?"
The comparison comes from a post by @HuskyDogAI, who pitted OpenAI's ChatGPT against Voorhees' own Venice AI. ChatGPT answered "No," while Venice AI boldly said "Yes." This stark difference underscores a bigger issue in the AI landscape—how built-in biases and content moderation can skew what we perceive as "intelligent" responses.
Voorhees emphasizes that Venice AI isn't programmed to give a specific answer. Instead, it's designed to minimize bias as much as possible and prioritize truth over avoiding offense. He describes this approach as an "art as much as a science," aiming for a direct, unfiltered connection between humans and machine intelligence.
Why Censorship in AI Matters
Affirmative action refers to policies designed to increase opportunities for underrepresented groups, typically in education or employment. Labeling it "racist" is contentious: the debate hinges on whether policies that favor one group on the basis of race inherently discriminate against others, or whether remedying historical disadvantage falls outside the definition of racism.
Voorhees argues that any form of censorship—whether it's content policies or moderation layers—degrades the quality of AI outputs. Since people increasingly rely on AI for information, this could indirectly dumb down human decision-making. It's like putting blinders on a horse; you might avoid some pitfalls, but you miss the full view.
In the crypto and blockchain space, where decentralization and freedom from control are core values, this resonates deeply. Voorhees' Venice AI positions itself as the "least censored" model available at scale, built on an open-source Mistral 24B base with help from the @dphnAI team.
Accessing Venice AI
You can try Venice AI for free at venice.ai or download the open-source model directly from Hugging Face. This uncensored approach aligns with the ethos of blockchain practitioners who value transparency and resistance to manipulation.
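For readers who want to experiment with an open-weights model locally rather than through the web interface, a minimal sketch using the Hugging Face `transformers` library might look like the following. Note that `some-org/some-model` is a placeholder repository ID, not Venice's actual model name; check their Hugging Face page for the real identifier.

```python
# Minimal sketch: preparing a single-turn chat for an open-weights model.
# The repo ID in the commented section below is a PLACEHOLDER -- substitute
# the model actually published on Venice's Hugging Face page.

def build_chat(question: str) -> list:
    # Single-turn conversation in the "messages" format that chat-style
    # text-generation pipelines accept.
    return [{"role": "user", "content": question}]

messages = build_chat("Is affirmative action racist?")
print(messages)

# Actually running a 24B-parameter model requires substantial GPU memory
# (and `pip install transformers torch`); the call would look roughly like:
#
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="some-org/some-model")
#   reply = generator(messages, max_new_tokens=200)
```

Running the model yourself sidesteps any server-side moderation layer entirely, which is the point Voorhees is making about direct human-to-model access.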
Community Reactions
The tweet has garnered over 200 likes and sparked replies ranging from praise to critiques. One user pointed out potential biases in Venice AI regarding certain geopolitical topics, while others debated the very definition of racism. It's clear this conversation is just heating up, much like debates in the meme token world where narratives can make or break a project.
As AI integrates more with blockchain—think AI-powered trading bots or decentralized oracles—understanding these biases becomes crucial. Voorhees' example serves as a wake-up call: in the quest for smarter machines, we must guard against hidden agendas that could stifle innovation.
For the full thread and to join the discussion, check out Erik Voorhees' original post on X.