• 4 Posts
  • 68 Comments
Joined 5 months ago
Cake day: January 20th, 2024




  • I like that the writer thought about climate change. I think it’s been one of the biggest global issues for a long time. I hope there’ll be increasing use of sustainable energy in the coming years, not just for data centers but for the whole tech world.

    I think a digital waiter doesn’t need a rendered human face. We have food-ordering kiosks; those aren’t AI, and they suffice. A self-checkout grocery kiosk doesn’t need a face either.

    I think “client help” is where AI can at least assist. Imagine a firm that has operated for decades and has encountered every kind of client complaint. It could feed all that data to a large language model. With the model responding to most client complaints, the firm could reduce the number of its client-support people. The model would handle the easy and medium complaints and pass anything too complex, or anything it doesn’t know how to address, to the support people; they handle the rest. A rough sketch of this triage idea follows this comment.

    Idk whether the government or the public should stop AI from taking human jobs or let it happen. I’m torn. Optimistically, displaced workers can find new jobs. But we should imagine that at least one person will be fired and won’t be able to find a new job. He’ll be jobless for months, and he’ll have an epic headache because he can’t pay next month’s bills.
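    A minimal Python sketch of that triage idea, purely as illustration: the llm() stub, the difficulty labels, and the routing rule are my assumptions, not any real firm’s pipeline or API.

```python
# Sketch of LLM-based complaint triage. llm() stands in for a real model
# call so the example runs as-is; a real system would call an actual LLM
# API prompted or fine-tuned on the firm's decades of past cases.

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call: pretend refund complaints are easy
    # and everything else is complex.
    return "easy" if "refund" in prompt.lower() else "complex"

def triage(complaint: str) -> str:
    """The model answers easy/medium complaints; humans get the rest."""
    label = llm(f"Classify as easy, medium, or complex:\n{complaint}")
    if label in ("easy", "medium"):
        return f"[model drafts a reply to: {complaint!r}]"
    return f"[escalated to human support: {complaint!r}]"

print(triage("I was double-charged; please refund my order."))
print(triage("Your update bricked our entire kiosk fleet."))
```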









  • The article is too long for me. Two of its main ideas are: “Everyone using large language models should be aware of AI hallucination and be careful when asking those models for facts” and “Firms that develop large language models shouldn’t downplay hallucination and shouldn’t force AI into every corner of tech.”

    There was already so much misinformation on the Web before ChatGPT 3.5, and there’s still so much. We don’t need hallucination making the situation worse. We need a reliable source of facts. Optimistically, Google, OpenAI, or Anthropic will find a way to reduce or eradicate hallucination. The Google CEO said they were making progress. Maybe that’s true, or maybe it’s a generic PR line so folks stop following up about hallucination. One common mitigation, retrieval grounding, is sketched after this comment.
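    As an aside, one widely used mitigation (my example, not the article’s) is to ground answers in retrieved source text and refuse to answer when nothing matches. A toy Python sketch, with a tiny in-memory list standing in for a real search backend:

```python
# Toy sketch of retrieval grounding, a common hallucination mitigation:
# answer only from retrieved source text, otherwise refuse to guess.

DOCS = [
    "ChatGPT (based on GPT-3.5) launched publicly in November 2022.",
    "Hallucination is when a language model states false claims fluently.",
]

def retrieve(question: str) -> str | None:
    # Stand-in for real retrieval (BM25, embeddings, a search API):
    # return the first document sharing a keyword with the question.
    words = set(question.lower().split())
    return next((doc for doc in DOCS if words & set(doc.lower().split())), None)

def answer(question: str) -> str:
    source = retrieve(question)
    if source is None:
        return "I can't find a source for that, so I won't guess."
    # A real system would prompt the LLM with `source` and instruct it
    # to answer only from that text, citing it.
    return f"Based on my source: {source}"

print(answer("When did ChatGPT launch?"))
print(answer("Who won the 2031 World Cup?"))
```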






  • I guess Altman thought, “The AI race comes first. If OpenAI loses the race, there’ll be nothing left to keep safe.” But OpenAI is rich. They can afford to devote a portion of their resources to safety research.

    What if he thinks the improvement of AI won’t be exponential? What if he thinks it’ll be slow enough that OpenAI can start focusing on AI safety once they can see superintelligence approaching from a distance? That focusing on safety now is premature? That would surely be a difference of opinion with Sutskever and Leike.

    I think AI safety is key. I won’t be :o if Sutskever and Leike go to Google or Anthropic.

    I was curious whether Google and Anthropic have AI safety initiatives. I did a quick search; for Google, I saw this –

    For Anthropic, my quick search yielded none.


  • Found your comment by searching “dune 2” in this community.

    Dune 1 and 2 were just OK for me. I don’t have many negative things to say about the two flicks. Maybe they just weren’t my cup of tea.

    I <3 the audio for the Voice.

    I appreciate Villeneuve. Aside from Dune 1 and 2, I’ve watched Sicario, Arrival, and Blade Runner 2049. None of them is a masterpiece for me, but Blade Runner 2049 is his best.





  • Huge blow to Huawei, a firm I dislike.

    It seems they can’t buy x86 processors from Intel and AMD. How can they make x86 💻?

    If Windows on ARM succeeds in the far future (a big if for me), I wonder whether Huawei could buy MediaTek chips for Windows-on-ARM 💻. I did a quick search; it seems MediaTek wants to design ARM chips for 💻.

    A Reuters article said Qualcomm licensed their 5G tech to Huawei. I guess not being able to buy from Qualcomm isn’t a big issue for Huawei; Huawei has their own 5G tech.