• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • When you get a moment, you could try switching over to the tty again, log in to the shell, and then try typing in the command btop (a fancier take on the classic “top” command, which I believe comes preinstalled on Bazzite). Top is basically a task manager: you can see what programs are running (and taking up resources) right there in the terminal. If your system freezes up, you can often unfreeze it by killing the unresponsive programs from there. It’s probably useful to familiarise yourself with that interface before you need it.


  • Echoing Jubilant Jaguar’s sentiment about defaults mattering, I think that an excess of choice can sometimes be overwhelming, leaving a user less empowered to make choices about the things they do care about. Sensible defaults need not remove anyone’s choice, and they can make the learning curve less steep



  • I dual boot Fedora and Arch. Fedora was just a fluke because it seemed like one of the most mainstream distros, and I was a Linux noob.

    I liked Arch though because the Arch wiki is so useful for a beginner to learn from, even if you’re not on Arch. At first, Arch seemed too complex and difficult for me as a beginner, but when I kept finding myself at the Arch wiki while troubleshooting, I realised how powerful good documentation is. I installed Arch with a “fixer-upper” type mindset, with the goal of using the greater power and customisability that Arch offers to build a config/setup that worked for me (learning all the while). It was a good challenge for someone who is mad, but not quite so mad as to dive into Gentoo or Linux From Scratch


  • (n.b. I am neither a Rust nor a C developer, so I am writing outside my own direct experience)

    One of the arguments brought up on the kernel.org thread was: if there are changes to the C side of the API, how do you avoid breaking all the Rust bindings? The reply was that, like any big change in the Linux kernel that affects multiple subsystems and multiple teams, it would require a coordinated and collaborative approach — i.e. it’s not as though the Rust side would only start responding to a breaking change once that change has already broken the bindings. This response (and many of the responses to it) seemed reasonable to me. (For anyone unfamiliar with what “the Rust bindings” actually are, I’ve put a tiny illustrative sketch at the end of this comment.)

    However, for that collaboration to work, there are going to have to be C developers speaking to Rust developers: the Rust developers who need to repair the bindings will have to understand some of what’s being proposed, and thus some level of C, and vice versa. So in practice, it seems nigh on impossible for the long-term, ongoing maintenance of this code to be entirely a task for the Rust devs (but I think that’s taking an unusually broad reading of “maintenance” — communicating with other people is just part and parcel of working on such a huge project, imo)

    Some people have an ideological opposition to there being two different programming languages in the Linux kernel, full stop. This is part of why the main thing Rust has been used for so far is drivers, which are fairly self-contained. Christoph Hellwig even used the word “cancer” to describe a slow creep towards a two-language codebase. I get the sense that, in his view, the change being proposed could be the beginning of the end if it leads to Rust becoming ever more prevalent in Linux.

    I haven’t written enough production code to have much of an opinion, but my impression is that the people who are concerned have a valid worry (I do have more than enough experience with messy, fragmented codebases), but that their opposition is too absolute. A framework that comes to mind is the risk assessment (like the ones done for scientific research): the risks it outlines often cannot be fully eliminated, but naming and discussing them means they can be reduced and mitigated. Using Rust in Linux at all wasn’t a decision taken lightly, and further use of it will need ongoing participation from multiple relevant parties, but that’s just the price of progress sometimes.
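
    To make the above concrete, here is a minimal, hypothetical sketch of what “a Rust binding over a C function” can look like. This is my own illustration, not actual kernel code, and the function name frobnicate is made up:

    ```rust
    // Imagine the C side of a driver API exposes (in a header):
    //
    //     int frobnicate(unsigned int id);
    //
    // The Rust binding has to mirror that C signature exactly:
    use core::ffi::{c_int, c_uint};

    extern "C" {
        fn frobnicate(id: c_uint) -> c_int;
    }

    /// Safe wrapper that the rest of the Rust code calls.
    pub fn frobnicate_checked(id: u32) -> Result<(), i32> {
        // SAFETY: assumes frobnicate has no preconditions beyond a valid id.
        let ret = unsafe { frobnicate(id) };
        if ret == 0 { Ok(()) } else { Err(ret) }
    }
    ```

    If the C side later becomes int frobnicate(unsigned int id, int flags);, the extern declaration and the wrapper above no longer match and have to be updated in the same patch series, which is exactly the coordination the thread is arguing about.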




  • When I first started using Obsidian, I used folders too much because I felt like things were “messy” if they weren’t tidied away. I already knew that one of the weaknesses of hierarchical folder systems is that they make it harder to keep an overview of the whole system, but it took a while for me to properly understand that.

    As you say, it’s necessary to be proactive about making links to things. I found that when I used Obsidian for journalling, I started to put square brackets around loads of stuff, because links to pages that didn’t exist yet did me no harm, but they did highlight what might be worth turning into actual pages. Something I picked up from the Zettelkasten crowd was occasionally having a “Map of Content” page, which I used as an index of links on a topic. These always worked best when I allowed them to arise naturally, as needed. Once I got the hang of this, I found I could locate things far more easily, because I could navigate via the links.

    Tags are tricky to use. I never found them useful as a primary organisation method — they were worse than both hierarchical folders and link-based organising in that respect. They were super useful as an augmentation to my organisation though, especially when I used them sparingly.

    This is all an overlong way of saying that yes, I agree with you: using systems like Obsidian does require a switch in how you think in order to get the best out of them. Something I always enjoy pondering is whether pushing ourselves out of our comfort zone is inherently good — something something cognitive flexibility? I don’t know, but I enjoy endeavours of this sort nonetheless



  • I’m answering a different question than the one you’re asking, but I switched to Linux (specifically Fedora) on my main computer not too long ago. I had been trying to get better at Linux because I work in scientific research, but I was anxious because games seemed far…messier and more complex than the scientific stuff I was more familiar with, and I didn’t want to kill my recreation. This worry was unnecessary, because I have been immensely impressed by how straightforward it is to play Steam games through Proton (the Windows compatibility layer thingy that Steam uses). There have been a couple of minor issues, but they were easy to troubleshoot and were the kind of problem that sometimes crops up on Windows too.

    I still feel quite overwhelmed by Linux, because I still don’t really understand why some things work on one operating system and not another. Like, I understand that .exe files don’t natively work on Linux (they need something like WINE or Proton; Proton is essentially WINE with extra bits for games), but I don’t understand why. I think to properly understand it I’d need to become a kernel developer or something silly, so I need to make my peace with not really understanding the difference. I think that’s okay though, because I don’t really need to know it. It’s sufficient to know that they are different and to know how to respond (i.e. knowing that the .exe version of software isn’t intended for my system, but that I can probably run it with WINE or Proton).

    Most of my teething problems with Linux have been non-game-related, and although some of them were stressful to troubleshoot, I found it refreshing how easy it was to learn how to fix them. Especially given that a big thing that drove me away from Windows was constantly feeling like my computer wasn’t my own. Often when Windows goes wrong, it makes fixing the problem harder by hiding away settings or obscuring information, in a way that perversely makes solving small things require a much higher level of expertise. It ends up feeling like the system doesn’t trust me to solve problems for myself, which makes me feel powerless. I suspect you may relate to much of what I have said in this paragraph.

    Coming to Linux from Windows can be stressful because suddenly you are trusted with a lot more power. You can delete your entire operating system with one command if you want (sudo rm -rf /* , if you’re curious) and there’s nothing stopping you. The lack of guardrails can be scary, but in my experience there are far more helpful and kind Linux nerds on the internet than assholes, so I have found plenty of guides that walk me through solving problems such that I’m not just blindly entering commands and praying to the computer god. You sound like a person with a mindset towards progression, so you will likely do well with this challenge. If you’re like me, you may relish the learning. Certainly I have enjoyed the feeling of progression over the last year or so.

    People here may suggest dual booting or using a virtual machine to try it out. I would suggest diving in, if you can. Unless you have software that you know is strictly Windows-only, setting aside some time to fully switch is a good way to immerse yourself. I tried virtual machines and dual booting, but I ended up getting lazy and just using Windows because it was the path of least resistance. I had to fully switch to force myself to start becoming familiar with Linux.

    Hardly any of this directly answers your question, so I apologise if this is unwelcome; I wrote so much because I am more enthusiastic about this than the tasks I am currently procrastinating. Best of luck to you


    Edit: some games have anticheat software that can cause issues. I play some multiplayer games with anticheat stuff and I’ve not had any problems, but I think I am fortunate to not play any with the kind of anticheat that gets its hooks in deep — they may be the rare exceptions to gaming being refreshingly straightforward. I didn’t consider them because they don’t affect me, but others have mentioned them and may have more to say.





  • Congrats! I appreciate this post because I want to be where you are in the not too distant future.

    Contributing to open source can feel overwhelming, especially when working outside one’s primary field. Personally, I’m a scientist who got interested in open source via my academic interest in open science (such as the FAIR principles for scientific data management and stewardship, which say that data should be Findable, Accessible, Interoperable and Reusable). This got me interested in how scientists share code, which led me to the horrifying realisation that I was a better programmer than many of my peers (and I was mediocre)

    Studying open source has been useful for seeing how big projects are managed, and I have been meaning to find a way to contribute (because as you show, programming skills aren’t the only way to do that). It’s cool to see posts like yours because it kicks my ass into gear a little.




  • The data are stored, so it’s not a live-feed problem. It is an inordinate amount of data that gets stored, though. I don’t understand this well enough to explain it properly myself, so I’m going to quote from a book [1]. Apologies for the wall of text.

    “Serial femtosecond crystallography [(SFX)] experiments produce mountains of data that require [Free Electron Laser (FEL)] facilities to provide many petabytes of storage space and large compute clusters for timely processing of user data. The route to reach the summit of the data mountain requires peak finding, indexing, integration, refinement, and phasing.” […]

    "The main reason for [steep increase in data volumes] is simple statistics. Systematic rotation of a single crystal allows all the Bragg peaks, required for structure determination, to be swept through and recorded. Serial collection is a rather inefficient way of measuring all these Bragg peak intensities because each snapshot is from a randomly oriented crystal, and there are no systematic relationships between successive crystal orientations. […]

    Consider a game of picking a card from a deck of all 52 cards until all the cards in the deck have been seen. The rotation method could be considered as analogous to picking a card from the top of the deck, looking at it and then throwing it away before picking the next, i.e., sampling without replacement. In this analogy, the faces of the cards represent crystal orientations or Bragg reflections. Only 52 turns are required to see all the cards in this case. Serial collection is akin to randomly picking a card and then putting the card back in the deck before choosing the next card, i.e., sampling with replacement (Fig. 7.1 bottom). How many cards are needed to be drawn before all 52 have been seen? Intuitively, we can see that there is no guarantee that all cards will ever be observed. However, statistically speaking, the expected number of turns to complete the task, c, is given by: where n is the total number of cards. For large n, c converges to n*log(n). That is, for n = 52, it can reasonably be expected that all 52 cards will be observed only after about 236 turns! The problem is further exacerbated because a fraction of the images obtained in an SFX experiment will be blank because the X-ray pulse did not hit a crystal. This fraction varies depending on the sample preparation and delivery methods (see Chaps. 3–5), but is often higher than 60%. The random orientation of crystals and the random picking of this orientation on every measurement represent the primary reasons why SFX data volumes are inherently larger than rotation series data.

    The second reason why SFX data volumes are so high is the high variability of many experimental parameters. [There is some randomness in the X-ray pulses themselves]. There may also be a wide variability in the crystals: their size, shape, crystalline order, and even their crystal structure. In effect, each frame in an SFX experiment is from a completely separate experiment to the others."

    “The Realities of Experimental Data”: “The aim of hit finding in SFX is to determine whether the snapshot contains Bragg spots or not. All the later processing stages are based on Bragg spots, and so frames which do not contain any of them are useless, at least as far as crystallographic data processing is concerned. Conceptually, hit finding seems trivial. However, in practice it can be challenging.”

    “In an ideal case shown in Fig. 7.5a, the peaks are intense and there is no background noise. In this case, even a simple thresholding algorithm can locate the peaks. Unfortunately, real life is not so simple”

    It’s very cool; I wish I knew more about it. A figure I found for the approximate data rate is 5 GB/s per instrument; I think that’s for the European XFEL.
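
    As a side note, the “about 236 turns” figure in the quote checks out: the expected number of draws is n times the n-th harmonic number. A tiny Rust sketch of my own (not from the book) that reproduces it:

    ```rust
    // Expected number of draws, with replacement, to see all n distinct cards:
    // the classic coupon-collector result, E = n * (1 + 1/2 + ... + 1/n).
    fn expected_draws(n: u32) -> f64 {
        let harmonic: f64 = (1..=n).map(|k| 1.0 / k as f64).sum();
        n as f64 * harmonic
    }

    fn main() {
        // Prints roughly 236 for a 52-card deck, matching the book's figure.
        println!("expected draws for n = 52: {:.1}", expected_draws(52));
    }
    ```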

    Citation: [1]: Yoon, C.H., White, T.A. (2018). Climbing the Data Mountain: Processing of SFX Data. In: Boutet, S., Fromme, P., Hunter, M. (eds) X-ray Free Electron Lasers. Springer, Cham. https://doi.org/10.1007/978-3-030-00551-1_7



  • He doesn’t directly control anything with C++ — it’s just the data processing. The gist of X-ray crystallography is that we shoot X-rays at a crystallised protein, which scatters them by diffraction; we can then take the resulting diffraction pattern and do some mathemagic to figure out the electron density of the crystallised protein and, from there, work out the protein’s structure

    C++ helps with the mathemagic part of that, especially because by “high throughput” I mean that the research facility has a particle accelerator that’s over 1 km long and cost multiple billions, and it can fire super bright X-ray pulses at a rate of up to 27,000 per second. It’s the kind of place that’s used by many research groups, and you have to apply for “beam time”. The sample is piped in front of the beam, and the result is thousands of diffraction patterns that need to be matched to particular crystals. That’s where the challenge comes in.

    I am probably explaining this badly because it’s pretty cutting-edge stuff that’s only adjacent to my own field, but I do know that some of the software used is called CrystFEL. My understanding is that learning C++ was necessary for extending or modifying existing software tools, and for troubleshooting anomalous results.