• 0 Posts
  • 82 Comments
Joined 11 months ago
Cake day: July 22nd, 2023

  • Compiled shaders are unique to every GPU model, and often to each driver revision. The console versions don’t stutter because they all have identical hardware, so compiled shaders can be shipped with the game.

    Steam will eventually download a shader cache specific to your hardware; otherwise, if you jump straight into a new game on PC, the game has to compile shaders during gameplay, or make you wait 30 minutes while they compile before you can play (similar to how a lot of emulators for modern consoles like the Switch make you wait). And since nobody wants to launch a newly downloaded game just to sit at a boring 30-minute loading screen, most games compile on the fly and do their best.

    This isn’t about defending Fromsoft; they’re just another company trying to get your money. I’m just saying that’s how PCs work, and developers of new games with complex shaders would probably rather be accused of performance issues at launch than hit players, who expect to launch a game and play right away, with a long loading screen (one that a patent prevents them from putting a mini game on while you wait).


  • From what I saw, the negative reviews were split between complaints about difficulty and complaints about performance. On the performance front, it looked to me to be mostly shader compilation stutter, which is relatively common with most new games.

    Difficulty wise, yeah, it’s hard. That’s a big part of the appeal of Fromsoft games. They have made some adjustments since launch to bring the difficulty down a bit, but it’s probably better that they launched a game that is “too hard” and patched the difficulty down than released something everyone could steamroll through in a day, only to get complaints that it was too easy. The game also rewards exploration; if you just try to rush the bosses without exploring, you’ll make things much harder on yourself.


  • I remember seeing an interview with the model, who at the time of the interview was in her 70s or 80s; she apparently wasn’t enthusiastic about having become a common test image. But since she had technically consented to be in Playboy (which was only a magazine at the time), there wasn’t anything she could do to stop it. I think in this case it’s probably best to stop using her image specifically, as it gets into a weird, messy question of consent, and how her consent to appear in a magazine morphed through technology into something more “permanent” than she originally realized. There are plenty of other models who would absolutely be down for that, and given enough time, knowing how nerds are, there will be other test images of women. But I think it’s probably for the best that this one gets retired from this use.

    And yes, there are people who have tried to use this instance to argue that “there shouldn’t be images of attractive/implied-nude women as standard test images, because it can cause body image issues for women who go into that field.” On one hand, I can see where they’re coming from; on the other, people take pictures of people, and some people do look better than most of us. Having more diverse test images would be a good thing, because we don’t all look like that. But some do, and they’re probably going to have more pictures taken of them than the rest of us.


  • Not sure exactly how well this would work for your use case of forwarding all traffic, but I use autossh and SSH reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports with nginx or apache on the VPS (sketch below). It might take a bit of extra configuration to go this route, but it’s been reliable for years for me. Wireguard is probably the “newer, right way” to do what I’m doing, but personally I find SSH tunnels a bit simpler to wrap my head around and manage.
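    Roughly what that looks like, as a sketch; the ports, hostname, and forwarded service are made up for illustration:

    ```
    # Keep a reverse tunnel alive from this machine to the VPS. -M 0 turns off
    # autossh's monitor port and relies on SSH keepalives to detect drops.
    autossh -M 0 -N \
      -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
      -R 127.0.0.1:8080:localhost:80 \
      user@my-vps.example.com
    # On the VPS, nginx or apache can then proxy requests to
    # http://127.0.0.1:8080 and put a real hostname and TLS in front of it.
    ```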

    Technically wireguard would have a touch less latency, but most of the latency comes from the round trip between you and your VPS, and the difference between the protocols is comparatively negligible (a VPS a thousand kilometers away already costs you on the order of 10–20 ms round trip, which dwarfs any per-packet protocol overhead).


  • I think that my skepticism, and my desire to have docker get out of my way, have more to do with already knowing the underlying mechanics: I was used to managing services before docker was a thing, and then docker came along and said “just learn docker instead.” Which is fine, except it means not only a shift away from what I already know, but a separation from it, with extra networking and docker configuration to fuss with. If I wasn’t already used to managing servers pre-docker, then yeah, I totally get it.


  • I’ll probably make the jump when Plasma 6.1 releases with its “real, fake session restore” functionality; I was hoping that would make it into Plasma 6. I am daily driving Wayland on my laptop now, but I kinda need my programs (or at least file managers and terminal windows) to re-open the way they were between reboots.

    Thanks to kscreen-doctor, I’ve been able to port most of the desktop scripts I use for managing my multiple monitors to work on Wayland (example below), and krdc/krfb have been a decent enough replacement for x11vnc or x2go for accessing the desktop on my home server/NAS remotely (I know, desktops on servers are considered sacrilege, but for me it’s been useful too many times to get rid of at this point).
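    Those scripts mostly boil down to one-liners like the following; the output names (DP-1, HDMI-A-1) and positions are placeholders for my setup, and you can list yours with kscreen-doctor -o:

    ```
    # Switch to a single-monitor layout: enable the main display at the
    # origin and turn the secondary one off. Names depend on your hardware.
    kscreen-doctor output.DP-1.enable output.DP-1.position.0,0 \
                   output.HDMI-A-1.disable
    ```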

    Where Wayland currently shines for me is VR: Steam VR works better, and more consistently, on Plasma Wayland than on X11 at this point, which is probably more of a Valve thing than a Wayland thing. When I first got my Index, X11 worked fine, but there have been times when Steam VR on Linux being “broken” made the news on Phoronix/Gaming on Linux yet still worked fine on Plasma Wayland (which seems to be where Valve is doing most of their SteamVR Linux testing as of late).

    As an end user, I do wish the Wayland specification process was organized better, because as an outsider, it seems a lot of the bickering that goes on comes down to everyone having different end goals. Splitting the different styles of window management into their own sub-specs or extensions, and then figuring out what can move into the core once everyone has built what they need, seems better than the current approach of compromising their way through every little decision, which doesn’t always make sense for every use case. Work together when it makes sense, but understand that there are times when it doesn’t, that you can’t please every stick in the mud, and that sometimes you’re going to have to do your own thing without them. I do get the appeal of doing things right the first time, even if it takes more time, but it seems like usability is always the thing that gets sacrificed when compromises are made.


  • That’s a big reason I actively avoid docker on my servers: I don’t like running a dozen instances of my database software, and considering how much work it would take to go through and configure each docker container to use an external database (sketch below), to me it’s just as easy to learn to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults made by whoever made the docker image.
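    To illustrate what that reconfiguration involves, here’s a sketch of pointing one container at an existing external database instead of a bundled one; the image name and environment variables are hypothetical, and the real names vary per image, so you end up digging through each image’s docs:

    ```
    # Hypothetical app container wired to an existing Postgres on the LAN
    # rather than its own bundled database. Every value here is an example.
    docker run -d --name myapp \
      -e DB_HOST=db.internal.lan \
      -e DB_PORT=5432 \
      -e DB_USER=myapp \
      -e DB_PASSWORD=changeme \
      myapp/image:latest
    ```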

    I hope a good amount of my issues with docker have been solved since I last seriously tried to use it (which was back when they were literally giving away free tee shirts to get people to try it). But the times I’ve peeked at it since, it seems to me that docker gets in the way more often than it solves problems.

    I don’t mean to yuck other people’s yum though, so if you like docker, and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system resource level, and personal time level of inserting an additional layer of configuration between me and my software).


  • In general, yes, more tabs = more RAM used, but Firefox has a neat trick compared to Chrome that lowers memory usage for those of us with hundreds of tabs. When you launch Chrome with a bunch of tabs open from a previous session, it loads them all into RAM at launch; Firefox doesn’t actually load the pages of tabs from previous sessions until you switch to them. The page titles and icons get loaded into RAM, obviously, but if you have lots of old tabs that you almost never open, the memory impact of having lots of tabs is minimized.
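    If you want to check or pin that behavior, it’s controlled by an about:config pref; the value below is Firefox’s stock default, and PROFILE is a placeholder for your actual profile folder:

    ```
    # Lazy session restore: tabs from the previous session stay unloaded
    # until you click them (true is the default).
    echo 'user_pref("browser.sessionstore.restore_on_demand", true);' \
      >> ~/.mozilla/firefox/PROFILE/user.js
    ```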