Ruaidhrigh

I AM THE LAW.
Ruaidhrigh featherstonehaugh

Ceterum Lemmi necessitates reactiones

  • 0 Posts
  • 31 Comments
Joined 3 years ago
Cake day: August 26th, 2022


  • I did some light reading. I see claims that wear leveling only ever writes to zeroed sectors. Let me get this straight:

    If I have a 1TB ssd, and I write 1TB of SecretData, and then I delete and write 1TB of garbage to the disk, it’s not actually holding 2TB of data, with the SecretData hidden underneath wear leveling? That’s the claim? And if I overwrite that with another 1TB of garbage it’s holding, what now, 3TB of data? Each data sequence hidden somehow by the magic of wear leveling?

    Skeptical Ruaraidh is skeptical. Wear leveling ensures data on an SSD is written to the free sectors with the lowest write count. It can’t possibly be retaining data if an amount of data equal to the full capacity of the device is written to it.

    I see a popular comment on SO saying you can’t trust dd on SSDs, and I challenge that: in this case, wiping an entire disk by dumping /dev/random over it must clean the SSD of all other data. Otherwise, someone’s invented the storage version of a perpetual motion device. To be safe, sync and read it back, and maybe dump again, but I really can’t see how an SSD could hold more data than its capacity.

    dd if=/dev/random of=/dev/sdX bs=1M

    If you’re clever enough to be using zsh as your shell:

    repeat 3 (dd if=/dev/random of=/dev/sdX bs=1M ; sync ; dd if=/dev/sdX of=/dev/null bs=1M)

    You reduce every single cell’s write lifespan by three writes; with modern lifespans of 3,000-100,000 writes per cell, that’s not significant.

    Someone mentioned blkdiscard. If you really aren’t concerned about forensic analysis, this is probably the fastest and least impactful answer: it won’t cost the cells even a few writes. But it also doesn’t actually remove the data; it just tells the SSD that those cells are free and empty. Probably really hard to reconstruct data from that, but also probably not impossible. dd is the shredding option: safer, slower, and with a tiny impact on drive lifespan.
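    You can’t verify any of this on a live SSD without forensics gear, but the overwrite logic itself is easy to sanity-check on a plain file standing in for /dev/sdX; the temp file, the marker string, and the sizes below are all made up for the demonstration.

```shell
# Safe stand-in for the dd wipe: a 1 MiB temp file instead of a real device.
f=$(mktemp)
yes SECRETDATA | head -c 1048576 > "$f"       # fill it with "secret" data
grep -aq SECRETDATA "$f" && echo "before: secret present"
# overwrite every byte in place, just like dd over a whole device
dd if=/dev/urandom of="$f" bs=4096 count=256 conv=notrunc 2>/dev/null
sync
grep -aq SECRETDATA "$f" || echo "after: secret gone"
rm -f "$f"
```

    Once every addressable sector has been written over, the old contents have nowhere left to live in the logical address space; that’s the same reasoning the full-device dd relies on.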


  • Educate me.

    My response would normally be: dd if=/dev/random of=/dev/sdX bs=1024M, followed by a sync. Lowest common denominator nearly always wins in my book over specialty programs that aren’t part of a minimal core; tools that also happen to be in BusyBox are the best.

    What makes this situation special enough for something more complex than dd? Do SSDs not actually save the data you tell them to? I’m trying to guess at how writing a disk’s worth of garbage directly to the device could fail. I’m imagining some holographic effect, where you can actually store more data than the drive holds. A persistent, on-disk cache that you have no way of directly affecting, but which can somehow be read later and can hold latent data?

    If I were really paranoid, I’d dd, read the entire disk to /dev/null, then dd again. How would this not be sufficient?

    I’m honestly trying to figure out what the catch is, here, and why this was even a question - OP doesn’t sound like a novice.


  • I have to put in a plug for herbstluftwm.

    It really depends on whether you like the keyboard and tiling window managers, or whether you like dragging windows around and resizing them. Tiling window managers are popular, but they’re definitely an acquired taste.

    hlwm and bspwm are a “configurationless” breed - I think river on Wayland is the same - and this has become my one requirement for a window manager. All configuration is done through command-line client calls, and it’s game changing. The “configuration” is just a shell script hlwm runs when it starts up, full of whatever client calls are needed to configure the system. Every call in that script can also be run outside the script; it’s literally just a shell script. I run all sorts of things in it: launching “desktoppy” programs like kanata and setx; autostarting programs on a specific screen; one script lays out a screen in a complex 2x1 layout where each pane is tabbed and contains three terminals, then launches terminals that connect to various remote computers - that’s my “remote server” screen, and it’s all set up when I log in.
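    For flavor, here’s a minimal autostart sketch - nothing here is my real setup; the keybinding, terminal, tag name, and layout string are all illustrative:

```shell
#!/bin/sh
# Minimal herbstluftwm autostart sketch. Every line is an ordinary
# herbstclient call that could equally be typed in a terminal.
hc() { herbstclient "$@"; }

hc emit_hook reload
hc set frame_gap 4                        # plain settings via "set"
hc keybind Mod4-Return spawn xterm        # keybindings too (terminal is an example)
hc add remote                             # create a "remote server" tag
# preload a 50/50 split on that tag (layout string is illustrative)
hc load remote '(split horizontal:0.5:0 (clients max:0) (clients max:0))'
```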

    However - definitely for tiling enthusiasts. I used i3 for a decade before I found bspwm, which converted me to configurationless WMs, and I ended up with hlwm. It’s honestly what’s preventing me from giving Wayland a serious go, although river might do the trick.


  • Edit: I haven’t tried this myself, but from what I can find, the gparted part is not necessary. You can get rid of Windows and re-use the partition for Linux with a single command: btrfs device add /dev/old_windows_partition /. The rest of the considerations below still apply.

    The answer to the question you asked is: make sure you know which partition it is and run dd if=/dev/random of=/dev/<partition> bs=1M. Then you’ll probably want to find out which boot loader you’re using and remove the Windows option. That will delete Windows.

    Re-using the free space, which is what most folks are focusing on, might be far easier than the other comments suggest.

    Odds are decent that you’re using btrfs. Most reasonable Linux distros default to it, so unless you changed it, it’s probably btrfs. With btrfs, you can simply change the partition type and add it to your existing filesystem.

    1. Use the program gparted. You can do all of this on the command line with fdisk, but gparted is a GUI program and is easier if you’re more comfortable with GUIs. Find the Windows partition, make sure you know it’s the Windows partition and not the boot partition (the boot partition will be the really tiny one), click on the Windows partition and choose the “change partition type” function to switch it to a Linux partition. There will be warnings; heed them, double check, and then save and exit.
    2. Add the old Windows partition to your existing filesystem with: btrfs device add /dev/sdx2 / . This adds the partition /dev/sdx2 to the filesystem mounted at / – your root filesystem. Replace /dev/sdx2 with whatever partition Windows used to be on.

    That’s it. Now your Linux filesystem is using the old Windows partition. If you don’t change the boot options, your system may still believe there’s a Windows to boot into after you reboot. If you’re using EFI, it should just disappear, but with grub you’ll have to tell grub that Windows isn’t there anymore or else it’ll keep offering it at each boot.

    You are almost certainly not using RAID, so you don’t need to worry about rebalancing.

    Summary: it is very likely your distribution used btrfs for your Linux partition. In that case, the absolute easiest way to get rid of Windows and use it for Linux is to add the partition to your btrfs filesystem. No reformatting, repartitioning, reinstalling; just tell btrfs to use it and you’re done.
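    For completeness, the pure-CLI version of the above looks roughly like this. I haven’t run this exact sequence, and the device names are placeholders, so verify everything with lsblk before touching anything:

```shell
lsblk -f                           # identify the old Windows (ntfs) partition
sudo fdisk /dev/sdX                # 't' changes the partition type to "Linux filesystem"
sudo wipefs -a /dev/sdX2           # clear the leftover NTFS signature
sudo btrfs device add /dev/sdX2 /  # grow the btrfs filesystem mounted at /
sudo btrfs filesystem usage /      # confirm the new space is visible
```

    With grub, follow up with grub-mkconfig -o /boot/grub/grub.cfg (or update-grub on Debian-family distros) so it stops offering Windows.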


  • I hear your frustration. It can be annoying. There’s a reason for it, and that’s because environment variables are limited in their use by scoping: they’re only inherited from the parent to children, and they’re pass-by-value. This means that, from a child process, you can’t influence the variables for any other sibling, or the parent. There’s no way to propagate environment variables to anyone except new children you fork.

    This is a significant limitation, and there is no workaround using only environment variables. It’s a large part of why applications store scalar values in files: it’s (almost) the only environment-agnostic way to propagate information to other processes.
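    The scoping rule takes two lines of shell to demonstrate: the child gets a copy, and nothing it does to that copy reaches back to the parent.

```shell
export FOO=parent
sh -c 'FOO=child; echo "child sees: $FOO"'   # the child changed only its own copy
echo "parent still sees: $FOO"               # the parent is untouched
```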

    Herbstluftwm has herbstclient getenv and setenv, because ostensibly every user process is a child of the window manager, and it’s a convenient way to communicate scalar changes between processes. tmux has similar commands; in both cases, the source of truth is the parent application, not the environment. gsettings is just Gnome’s version; KDE has its own version. I’d be surprised if Sway didn’t.

    Environment variables are great, but they’re limited, and they’re simply unsuitable for some purposes. They’re also insecure: anyone with the right permissions can read them from /proc. The consequence is that it can be difficult to track down where settings are stored, especially if you’re still using some component of a desktop, which tends to manage all of its settings internally.

    We do have a global solution for Linux: the kernel keyring. It’s secure, and global. It is not, however, automatically persisted, although desktops could easily persist and restore values from the keyring when they shut down or start up. Instead, every desktop I know of just keeps its own version of what’s essentially the Windows registry.
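    As a sketch of what using it looks like with the keyutils CLI (assuming keyctl is installed; the key name and value are made up):

```shell
# Store a scalar in the session keyring; any process in the session
# can read it back by name, regardless of parent/child relationships.
id=$(keyctl add user myapp:theme "dark" @s)
keyctl print "$id"                  # prints the stored value: dark
keyctl search @s user myapp:theme   # other processes look it up by name
```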

    It’s a mess.



  • The whole thing about selling DVDs was that you were selling the DVD, not the distribution on it. You were “charging a reasonable price for the service of burning the DVD, for the media, and for distribution.” Much of that went away with the internet, when people could download and burn ISOs themselves. It used to be quite common; not just distributions, but CDs full of OSS software. Again, the assumption and expectation was that you weren’t selling the software, but the media. There was no such thing as a “Pro” version of Linux. There were commercial distributions, and there was a period when companies were trying to figure out ways to commoditize OSS, but there were also lawsuits, and it mostly settled out to be service agreements, which were in the end more lucrative anyway.

    I disagree about the immorality of selling FOSS. Even in the very rare case that you built the entire program from scratch, using no FOSS libraries, you probably still used gcc, or the Python interpreter, or go or rustc. And in most cases, you are using libraries that other people created and gave away for free. Instead of giving back to the community, you’re charging for your work, so the people whose software you’re implicitly selling - the software yours is built on and depends on - can’t use it similarly for free. And odds are also good that, even though your shim is utterly reliant on their hard work, you’re not splitting up the profit and sharing it with them. How much money do those people send to Linus Torvalds? Or the countless kernel contributors? Or the people who’ve worked on libc?

    I have absolutely no issue with people who request donations for the software that they built and regularly and consistently maintain. And people charging for OSX or Windows software? It costs more than just free time to develop and release on those platforms - the entire chain is commercial. But when your product is an unmeasurably tiny fraction of all of the gratis effort that went into the end product, well. It doesn’t seem right to profit on others’ work, does it?

    Look, we’re a capitalist society. It takes someone time and material to make a chair from scratch, and when you take it, they don’t have it any more. They used nothing free except maybe YouTube videos, or their parents’ training. The FOSS ecosystem is the closest thing we have to functioning communism in the world; it works because, while it may take my time to create something, it doesn’t cost me more than my time, and once it’s done it can be endlessly replicated and used by innumerable people at no significant cost to me. When actors take advantage of the free ecosystem and don’t contribute back in like fashion, in my book that’s unethical.


  • It’s kind of creepy, weird, and unusual. However, I can see someone building a distro and creating a bunch of non-OSS themes and basically selling the themes. That’s not beyond the pale, although it is (again) questionably moral given that they probably created all of those themes using free software that they didn’t pay for. But whatever… just keep that in mind.

    The usual way of commoditizing Linux is to sell service - so you get, like, 4 tickets a month or something where someone is guaranteed to be there to try to fix whatever problem you have within a reasonable amount of time, and you don’t have to rely on the kindness of strangers.

    Zorin is mainstream enough that I suspect if they were really violating the GPL, someone would be on their case already. You can’t - usually, depending on the license - just repackage OSS and sell it. So if that’s how you wanted to spend your money, you do you and don’t worry about the comments on this thread.

    Oh, about your question: according to the upgrade page, the “upgrade” is just access to more packages, probably in another repo. You won’t have to re-install the distribution, and there won’t be any impact on your dual boot. You’re just getting more packages to install.


  • Yeah, but the implementations are really sparse. JQuery sucked all of the air out of the room.

    I much prefer JSONPath, although it’s a little rough in some areas where JSON’s design doesn’t align 1:1 with XML and XPath.

    Do you use it? Is there a good CLI tool for it? Every few years I get mighty sick of jq and go looking for an alternative, and I haven’t found a good JSONPath implementation yet.



    I have used todo.txt for, shit, over a decade now. Jesus. Anyway, I just sync files with whatever - in olden days rsync, nowadays SyncThing. But I’ve occasionally speculated about syncing with VTODO instead.

    Whenever I start to think through it, I eventually come to the same conclusion: it seems out of place, and more fussy than just copying a file via SyncThing or even just PUT-ting a file over WebDAV. I guess the value would be conflict resolution?

    If I have one criticism of SyncThing, it’s that there’s absolutely no facility for conflict resolution, even after all these years. There’s no way to configure a client to say, “if you get a conflict on a .txt file, try running ‘automerge’; if it exits with an error, leave it a conflict; if it exits with success, mark it resolved.” There are merge tools for a variety of file types, from txt to ODF to JSON. It’d be an almost trivial feature to add, and it’s frustrating that it’s still missing.
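    To make that concrete, the hook could be this small. “automerge” is the placeholder tool name from above, and the whole mechanism is hypothetical, since SyncThing offers no such hook:

```shell
# Recover the original name from a SyncThing conflict copy, e.g.
# "todo.sync-conflict-20240101-120000-ABC123D.txt" -> "todo.txt"
original_of() {
  printf '%s\n' "$1" | sed 's/\.sync-conflict-[0-9]*-[0-9]*-[A-Z0-9]*//'
}

# Hypothetical resolution hook: merge cleanly, or leave the conflict alone.
resolve() {
  conflict=$1
  orig=$(original_of "$conflict")
  if automerge "$orig" "$conflict" > "$orig.merged" 2>/dev/null; then
    mv "$orig.merged" "$orig" && rm -- "$conflict"   # merged: mark resolved
  else
    rm -f -- "$orig.merged"                          # failed: leave it for a human
  fi
}
```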


  • Ak-shually… you’re completely right!

    But you left out an important option for OP: they can just turn on auto-login and bypass the login screen entirely. If they want any security, they’ll need a display manager, but maybe they don’t care. Also, while this doesn’t apply to them, I discovered accidentally that after I log in to herbstluftwm, it goes directly to screen lock. I don’t know what I did to make that happen, but I’ve realized I can just disable the display manager, have auto-login, and still get security. Probably not as much, and if I ever get around to encrypting home that won’t work anymore, but I’ve been considering doing it because typing my password in twice is a drag.



  • I’ve done the Arch-to-Artix migration. It wasn’t hard, per se, but it took a while. I think that should be Medium, because Artix isn’t just an Arch derivative.

    In fact, might I suggest a different way of looking at the difficulties?

    • Replacing the package manager: Hard.
    • Replacing the package manager without a live USB: Extreme.
    • Going from a basic systemd-based distro (init, log, cron) to anything else: Hard.
    • Going from a systemd distro that’s bought into the entire systemd stack, including home and boot: Extreme.
    • Going from one init to another: Medium.
    • Changing boot systems (grub to UEFI, for example): Easy.
    • Replacing all GNU tools with other things: Extreme (mainly because of script expectations).

    And so on. You get 1 point for Easy, 2 for Medium, 4 for Hard, and 8 for Extreme. Add 'em up, go for a high score.
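    The tally takes a few lines of shell; the sample migration list at the end is just an example of how points add up:

```shell
# Sum difficulty points: Easy=1, Medium=2, Hard=4, Extreme=8.
score() {
  total=0
  for d in "$@"; do
    case $d in
      Easy)    total=$((total + 1)) ;;
      Medium)  total=$((total + 2)) ;;
      Hard)    total=$((total + 4)) ;;
      Extreme) total=$((total + 8)) ;;
    esac
  done
  echo "$total"
}

score Hard Extreme Medium   # e.g. new package manager, no live USB, new init: prints 14
```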

    I don’t think rolling your own is that hard, TBH, unless you’re expected to also build a package manager. Maintaining it would be harder than building it, though.