• 0 Posts
  • 104 Comments
Joined 1 year ago
Cake day: May 31st, 2023










  • where anyone thinks it’s ok or normal to recommend suicide to people

    Except that’s already happening even without it being normalized; there have always been assholes who will tell people to kill themselves, especially when they’ve never met the person they’re talking to. I don’t see how this is any different.

    Literally the whole thing would not have happened without the policy.

    It also wouldn’t have happened if a fucked up system wasn’t withholding actual, reasonable alternatives that the person was clearly asking for. That’s my point. Let’s fix the actual problems, rather than try to silence the symptoms.


  • …and did you notice how everyone was outraged by that? That incident was not an issue with assisted suicide being available, that was an issue with fucked up systems withholding existing alternatives and a tone-deaf case worker (who is not a doctor) handling impersonal communications. Maybe it’s also an issue with this kind of thing being able to be decided by a government worker instead of medical and psychological professionals. But definitely nothing about this would have been made better by assisted suicide not being generally available for people who legitimately want it, except the actual problem wouldn’t have been put into the spotlight like this.


  • I don’t want to create a future where, “I’ve tried everything I can to fix myself and I still feel like shit,” is met with a polite and friendly, “Oh, well have you considered killing yourself?”

    Are you for real? This kind of thing is a last resort that nobody is going to suggest unprompted to a suffering person, unless that person asks for it themselves. No matter how “normalized” suicide might become, it’s never gonna be something doctors will want to recommend. That’s just… why would you even think that’s what’s gonna happen?





  • I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, then you can tamper with metadata all you want, but you’re not gonna be able to produce the correct signature to attach to an image unless you have access to the certificate’s private key. This technology has been around for ages, is used in every web browser, and would be pretty simple to implement.

    The only weak point with this approach would be that it relies on the private key not being publicly accessible, which makes this a lot harder or maybe even impossible to implement for open source models that anyone can run on their own hardware. But then again, at least for what we’re talking about here, the goal wouldn’t need to be a system covering every model, just one that makes at least a couple models safe to use for this specific purpose.

    I guess the more practical question is whether this would be helpful for any other use case. Because if not, I highly doubt it’s gonna be implemented. Nobody is gonna want the PR nightmare of building a feature with no other purpose than to help pedophiles generate stuff to get off to “safely”, no matter how well-intentioned.
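    To make the signing idea above concrete, here’s a minimal sketch in Python using the third-party `cryptography` package and Ed25519 keys (both are my choices for illustration; the comment doesn’t name a specific algorithm or library). The model operator would keep the private key secret and publish only the public key; anyone can then check whether an image really came from that operator, and any change to the image bytes breaks verification.

    ```python
    # Hedged sketch, not a real provenance system: sign raw image bytes
    # with Ed25519 via the `cryptography` package (assumed installed).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # The model operator holds the private key; only the public key ships.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    image_bytes = b"...raw image data produced by the model..."
    signature = private_key.sign(image_bytes)

    # Anyone with the public key can verify provenance.
    try:
        public_key.verify(signature, image_bytes)
        print("authentic")
    except InvalidSignature:
        print("tampered or unsigned")

    # Flipping even one byte invalidates the signature.
    try:
        public_key.verify(signature, image_bytes + b"x")
        print("authentic")
    except InvalidSignature:
        print("tampered or unsigned")
    ```

    This is exactly the weak point mentioned above: the whole scheme stands or falls on `private_key` staying secret, which is why it only works for hosted models, not ones anyone can run locally.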