• 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: March 8th, 2024

  • Well, there are a couple of caveats to that. One is that it’s far from the first time an emulator has been taken down for similar reasons, and historically that has been pretty ineffective in the grand scheme, especially when alternative forks are available. “Far reaching consequences” is a bit of an overstatement, at least for those of us who went down into the Bleem! mines back in the day. You may be connecting things that aren’t that directly connected here.

    The second is that you’re still conflating people who don’t act out their annoyance the way you’d like with people who aren’t annoyed at all. I’m not here defending Nintendo; this sucks. I’m here saying that I don’t want to shame Nintendo into the same awkward gray area Google as an intermediary and every other IP holder currently inhabits, I want actually effective regulation that protects legitimate content creators from IP abuse, including from predatory corporations. You are looking to perform outrage in a room of like-minded people, and I get that you want to vent, but it’s not particularly useful to get mad at people who agree with you for not matching your emotional level while they do.



  • I did not claim that creating an emulator is illegal. You don’t sue people for a crime, either. “Illegal” and “criminal” are different concepts, and making an emulator without tapping into proprietary assets is neither.

    We don’t know what Nintendo used to threaten Ryujinx, so we don’t know how likely it is that they would have won. We do know the Yuzu guys messed up and gave them a better shot than in the other times they have failed at this exact play.

    You are very mad at an argument nobody is making.


  • They are absolutely within their rights to approach the developers of Ryujinx and threaten to sue them. Based on how things have worked so far they’d lose, and I agree with you that the inequality in that interaction is terrible and should be addressed.

    In the Yuzu scenario the threat carries more weight, because of the specific proprietary elements found in the emulator.

    And then there’s Nintendo targeting emulation-based handhelds and streamers for featuring emulated footage of their first party games in YouTube videos, which falls directly under the mess that is copyright enforcement on YouTube and other social platforms.

    In all of those cases, a clearer, more rules-based organization of IP that explicitly covers these scenarios would have helped people defend against Nintendo’s overreach, or at least have a clearer picture of what they can do about it. We can’t go on forever relying on custom, subjective judicial interpretation and non-enforcement. We’re way overdue on a rules-based agreement of what can and can’t be done with media online.

    The worst part is… we kinda know. There is a custom-based baseline for it we’ve slowly acquired over time. It’s just not properly codified; it exists in EULAs and unspoken, unenforceable practices. It’s an amazing gap in what is a ridiculously massive cultural and economic segment. It’s crazy that we’re running on “do you feel lucky?” when it comes to deciding whether a corporation can claim you’re not allowed to do a thing on the Internet that involves media. We need to know what we’re allowed to do so we can say “no” when predatory corporations like Nintendo show up to enforce rights they don’t have or shouldn’t have.


  • Yeeeah, Nintendo sucks.

    And it sucks that, despite this not killing the distribution of Yuzu or Ryujinx forks, it does make them less safe and reliable for users, as well as hindering ongoing development.

    Ultimately, though, Nintendo is acting within their rights. Which is not an endorsement; it’s proof that modern copyright frameworks are broken and unfit for purpose in an online world. We need a refoundation of IP. Not to make everything freely accessible, necessarily, but to make it make sense online instead of having to rely on voluntary non-enforcement. I don’t care if it’s YouTube or emulation development, you should know whether your project is legal and safe before you have lawyers showing up at your door with offers you can’t refuse.


  • He shipped enough clunkers (and terrible design decisions) that I never bought the mythification of Jobs.

    In any case, the Deck is a different beast. For one, it’s the second attempt. Remember Steam Machines? But also, it’s very much an iteration on pre-existing products, where its biggest asset is leveraging an endless budget and first party control of the platform to turn scale into a pricing advantage.

    It does prove that the system itself is not the problem, in case we hadn’t picked up on that with Android and ChromeOS. The issue is having a do-everything free system where some of the do-everything requires you to intervene. That’s not how most people use Windows (or Android, or ChromeOS), and it’s definitely not how you use any part of SteamOS unless you want to tinker past the official support, either. That’s the big lesson, I think. Valve isn’t even trying to push Linux, beyond their Microsoft blood feud. As with Google, it’s just a convenient stepping stone in their product design.

    What the mainline Linux developer community can learn from it, IMO, is that coupling the software and the hardware very closely matters for onboarding, and that Linux should find a way to do that in more product categories, even if it is by partnering with manufacturers that won’t do it themselves.



  • I genuinely think Linux misses a beat by not having a widely available distro that is a) very closely tied to specific hardware and b) mostly focused on web browsing and media watching. It’s kinda nuts and a knock on Linux devs that Google is running away with that segment through both Android and ChromeOS. My parents aren’t on Windows anymore but for convenience purposes the device that does that for them is a Samsung tablet.


  • I keep trying to explain how Linux advocacy gets the challenges of mainstream Linux usage wrong and, while I appreciate the fresh take here, I’m afraid that’s still the case.

    Effectively this guide is: lightly compromise your Windows experience for a while until you’re ready, followed by “here’s a bunch of alien concepts you don’t know or care about” that actively disprove the idea that it’s all about the app alternatives.

    I understand why this doesn’t read that way to the “community”, but parse it as an outsider for a moment. What’s a snap? Why are they bad? Why would I hate updates? Aren’t updates automatic as they are in Windows? Why would I ever pick the hardware-incompatible distros? What’s the tradeoff supposed to be? Does that imply there is a downside to Mint over Ubuntu? It sure feels like I need to think about this picking a distro thing a lot more than the headline suggested. Also, what’s a DE and how is that different from a distro? Did they just say I need a virtual machine to test these DE things before I can find one that works? WTF is that about?

    Look, I keep trying to articulate the key misunderstanding and it’s genuinely hard. I think the best way to put it is that all these “switch to Linux, it’s fun!” guides are trying to onboard users to a world of fun tinkering as a hobby. And that’s great, it IS fun to tinker as a hobby, to some people. But that’s not the reason people use Windows.

    If you’re on Windows and mildly frustrated about whatever MS is doing that week, the thing you want is a one-button install that does everything for you, works the first time and requires zero tinkering in the first place. App substitutes are whatever, UI changes and different choices in different DEs are trivial to adapt to (honestly, it’s all mostly Windows-like or Mac-like, clearly normies don’t particularly struggle with that). But if you’re out there introducing even a hint of arguments about multiple technical choices, competing standards for app packages, or VMs being used to test out different desktop environments, you’re kinda missing the point of what’s keeping the average user from stepping away from their mainstream commercial OS.

    In fairness, this isn’t the guide’s fault, it’s all intrinsic to the Linux desktop ecosystem. It IS more cumbersome and convoluted from that perspective. If you ask me, the real advice I would have for a Windows user who wants to consider swapping would be: get a device that comes with a dedicated Linux setup out of the box. Seriously, go get a Steam Deck, go get a System76 laptop, a Raspberry Pi or whatever else you can find out there that has some flavor of Linux built specifically for it and use that for a bit. That bypasses 100% of this crap and just works, the way Android or ChromeOS do out of the box. You’ll get to know whether that’s for you much quicker, more organically and with much less of a hassle that way… at the cost of needing new hardware. But hey, on the plus side, new hardware!


  • Yeah, on that I’m gonna say it’s unnecessary. I don’t know what “integration with the desktop” gets you that you can’t get from having a web app open or a separate window open. If you need some multimodal goodness you can just take a screenshot and paste it in.

    I’d be more concerned about model performance and having a well integrated multimodal assistant that can do image generation, image analysis and text all at once. We have individual models but nothing like that that is open and free, that I know of.



  • That is a stretch. If you try to download and host a local model, which is fairly easy to do these days, the text input and output may be semi-random, but you definitely have control over how to plug it into any other software (a rough sketch of what that looks like is at the end of this comment).

    I, for one, think that fuzzy, imprecise outputs have lots of valid uses. I don’t use LLMs to search for factual data, but they’re great to remind you of names of things you know but have forgotten, or provide verifiable context to things you have heard but don’t fully understand. That type of stuff.

    I think the AI shills have done a great disservice by presenting this stuff as a search killer or a human replacement for tasks, which it is not, but there’s a difference between not being the next Google and being useless. So no, Apple and MS, I don’t want it monitoring everything I do at all times and becoming my primary interface… but I don’t mind a little search window where I can go “hey, what was that movie from the 50s about the two old ladies that were serial killers? Was that Cary Grant or Jimmy Stewart?”.
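    For what it’s worth, here’s a minimal sketch of what “plugging it into any other software” can look like, assuming a locally hosted model served through Ollama’s default HTTP endpoint; the model name and the prompt are just placeholders, not recommendations:

    ```python
    # Minimal sketch: query a locally hosted LLM from your own code.
    # Assumes an Ollama server on its default port (http://localhost:11434)
    # and that some model (here "llama3", purely a placeholder) has been pulled.
    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete response instead of a stream of chunks
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_local_model("What was that movie from the 50s about the two old ladies that were serial killers?"))
    ```

    Once you have something like that function, wiring the output into a launcher, a note-taking app or whatever else you run is just regular scripting, and none of it depends on a cloud vendor.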



  • Yeah, for sure. If you just drop Ubuntu or Fedora or whatever on a machine where everything works for you out of the box the experience is not hard to wrap your head around. Even if one thing needs you to write something in a terminal following a tutorial, that’s also frequent in Windows troubleshooting.

    The problem is that all those conversations about concurrent standards for desktop environments, display protocols, software distribution methods and whatnot are hard to grasp across the board. If and when you hit an issue that requires wrapping your head around those, that’s where the familiarity with Windows’ messy-but-straightforward approach becomes relevant.

    In my experience the problem isn’t going through the motions while everything works, or using the system itself; it’s the first time you try to go off the guardrails or you encounter a technical issue. That’s when the hidden complexity becomes noticeable again. Not because the commands are text, but because the underlying concepts are complex and have deep interdependencies that don’t map well to other systems and are full of caveats and little differences depending on what combination of desktop and distro you’re trying to use.

    That’s the speed bump. It really, really isn’t the terminal.


  • Well, the good news is that of course you can use Linux with only as much command line interaction as you get in Windows.

    The bad news is that the command line REALLY isn’t what’s keeping people away from Linux.

    Hell, in that whole list, the most discouraging thing for a new user isn’t the actually fairly simple and straightforward terminal commands, it’s this:

    Here’s where it gets a little trickier: Scrolling on Firefox is rough, cause the preinstalled old version doesn’t have Wayland support enabled. So you either have to enable Wayland support or install the Flatpak version of Firefox.

    This is a completely inscrutable sentence. It is a ridiculous notion; it brings up so many questions and answers none. It relates to concepts that have no direct equivalent on other platforms, and even a new user who successfully follows this post and gets everything working would come out the other end without understanding why they had to do what they did or what the alternative was.

    I’ve been saying it for literal decades.

    It’s not the terminal, it’s not the UX not looking like Windows.



  • Local and secure image recognition is fairly trivial in terms of power consumption, but hey, there’s likely going to be some option to turn it off, just like hardware acceleration for video and image rendering, which uses the same GPU in similar ways. The power consumption argument is not invalid, but the way people deploy it is baffling to me, and is often based on worst-case estimates that are not realistic by design.

    To be clear, Apple is now building CPUs into iPads that can parse these queries in seconds, running at a few tens of watts. Each time I boot up Tekken on my 1000W gaming PC for five minutes I’m burning up more power than my share of AI queries for weeks, if not months.

    On the second point I absolutely disagree. There is no practical advantage to making accessibility annoying to implement. Accessibility should be structural, mandatory and automatic, not a nice thing people do for you. Eff that.

    As for the third part, no alt text I’ve seen deployed adds much value beyond a description of the content. What is measurable and factual is that the coverage of alt-text, even in places where it’s disproportionately popular like Mastodon, is spotty at best and residual at worst. There is no question that automated alt-text is better than no alt-text, and most content has no alt-text (a rough sketch of automating that is at the end of this comment).

    That is only the tip of the iceberg for ML applied to accessibility, too. You could do active queries, you could have users be able to ask for additional context or clarification, you could have much smoother, automated voice reading of text, including visual description on demand… This tech is powerful in many areas, and this is clearly one. In fact, this is a much better application than search, by a lot. It’s frustrating that search and factual queries, where this stuff is pretty bad at being reliable, are the thing everybody is thinking about.