• 0 Posts
  • 38 Comments
Joined 4 months ago
Cake day: March 13th, 2024





  • This article conveniently omits Israel-Palestine relations prior to and during periods of minimal US meddling. Let’s take a look at the prelude to the current conflict to get our bearings.

    Obama made statements early in his presidency about lasting peace in the Middle East. His first meeting with Netanyahu was a disaster, and so he dropped the issue for his entire term. Eight years of pretty much ignoring the Palestinians. Trump enters office and likewise makes public statements supporting lasting peace. His meetings with Netanyahu were a great success…for Israel specifically. The US changed policy to state that illegal Israeli settlements were legal, it recognized Jerusalem as the capital of Israel with Israel as the sole owner of the city, and it began to normalize relations between Saudi Arabia and Israel. All of this was a big kick in the pants to the Palestinians, who were never consulted on any of these policy changes.

    Biden entered office and continued to push for normalization of relations between Israel and Saudi Arabia, but let’s be honest, he had a do-nothing attitude similar to Obama’s when it came to lasting peace.

    Then Hamas attacks Israel. The US hadn’t engaged the Palestinians for over a decade, and Arab nations were starting to normalize relations with Israel with no regard for Palestine. It is hard to imagine what else Hamas could have done to get the attention of the US and Arab nations.

    And that brings us to the present, where Israel’s retaliation has once again captured the attention of the US and Arab nations and put the needs of the Palestinians in the minds of their leaders.

    In my opinion, if we had meddled more during peacetime and engaged with the Palestinians in the absence of conflict, then we could have avoided the current war altogether. The current conflict appears to be the result of an absence of US meddling, or at the very least an unwillingness to recognize the needs of the Palestinians during times of relative peace.


  • Go to PubMed. Type “social media mental health”. Read the studies, or the reviews if you don’t have the time.

    The average American teenager spends 4.8 hours/day on social media. Increased use of social media is associated with increased rates of depression, eating disorders, body image dissatisfaction, and externalizing problems. These studies don’t show causation, but guess what, we literally cannot show causation in most human studies because of ethics: you cannot randomly assign teenagers to years of heavy social media use just to see what it does to them.

    Social media drastically alters peer interactions, with negative interactions (bullying) associated with increased rates of self harm, suicide, internalizing and externalizing problems.

    Mobile phone use alone is associated with sleep disruption and daytime sleepiness.

    Looking forward to your peer-reviewed critiques of these studies claiming they are all “just vibes.”






  • I remember hearing this argument before…about the Internet. Glad that fad went away.

    As it has always been, these technologies are being used to push us forward by teams of underpaid, unnamed researchers with no interest in profit. Meanwhile you focus on the scammers and capitalists and empty your wallets for them, all while complaining about the lack of progress as measured by the products you see in advertisements.

    Luckily, when you get that cancer diagnosis or your child is born with some rare disease, that progress will attend to your needs despite your ignorance of it.




  • These cases are interesting tests of our First Amendment rights. “Real” CP requires the abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

    Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions, but also because we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. But in the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

    So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

    We will have some very interesting case law around AI-generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

    A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.



  • You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we face the same fundamental mystery with the human brain, and we have no problem asserting that humans are capable of internal representation. LLMs adhere to grammar rules, present information with a logical flow, and express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?

    We take in external stimuli and perform billions of operations on them. This is internal representation. An LLM takes in external stimuli and performs billions of operations on them. But the latter is incapable of internal representation?

    And I don’t buy the idea that hallucinations are evidence that there is no internal representation. We hallucinate. An internal representation does not need to be “correct” to exist.



  • How do hallucinations preclude an internal representation? Couldn’t hallucinations arise from a consistent internal representation that is not fully aligned with reality?

    I think you are misunderstanding the role of tokens in LLMs and conflating them with internal representation. Tokens are used to generate a state, similar to external stimuli. The internal representation, assuming there is one, is the manner in which the tokens are processed. You could say the same thing about human minds, that the representation is not located anywhere like a piece of data; it is the manner in which we process stimuli.
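
    To make the token-to-state point concrete, here is a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 purely as a stand-in for any LLM; the specific model and code are my own illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# External stimulus: text broken into discrete tokens.
inputs = tokenizer("The cat sat on the mat", return_tensors="pt")

# Processing the tokens yields a stack of hidden states, one tensor per layer.
# Nothing here is a stored "fact"; the representation is the processing itself.
outputs = model(**inputs, output_hidden_states=True)

print(len(outputs.hidden_states))       # 13 for GPT-2: embedding layer + 12 transformer blocks
print(outputs.hidden_states[-1].shape)  # (batch, num_tokens, hidden_size), e.g. (1, 6, 768)
```

    Those hidden states exist only while the tokens are being processed; they are not facts sitting somewhere in a database. That is the sense in which the representation is the manner of processing, not a location.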



  • I think where you are going wrong here is assuming that our internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, and thus we are able to check these hallucinations against outside stimuli. Your gripe that current LLMs are unable to do that is really a criticism of the current implementations of AI, which are trained on some data, frozen, and then restricted from further learning by design. Imagine if your mind were removed from all stimulus and then tested. That is what current LLMs are, and I doubt a human mind would behave much better in such a scenario. Just look at what happens to people cut off from social stimulus: their mental capacities degrade rapidly, and that is just one type of stimulus.

    Another problem with your analysis is that you expect the AI to do something that humans cannot do: cite sources without an external reference. Go ahead right now and, from memory, cite a source for something you know. Do not run a Google search; just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment, and current LLMs do not have that by design. Once again, this is a gripe with the implementation of a very new technology.

    The main problem I have with so many of these “AI isn’t really able to…” arguments is that no one is offering a rigorous definition of knowledge, understanding, introspection, etc. in a way that can be measured and tested. Further, we just assume that humans are able to do all these things, without any tests to see whether we can. Don’t even get me started on the free will vs. illusory free will debate that remains unsettled after centuries. But the crux of many of these arguments is the assumption that humans can do these things and are somehow uniquely able to do them. We had these same debates about levels of intelligence in animals long ago, and we found that there really isn’t any intelligent capability that is uniquely human.