Just a regular Joe.

  • 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: July 7th, 2023


  • Another technique that helps is to limit the information shared with clients to what they need to know. This can be computationally intensive server-side and hard to get right … but it can help in many cases. There are evolving techniques to do this.

    In FPS games, there can also be streaming input validation: eg. accurate fire requires the right sequence of input events, and/or the input stream is used for cheat detection. At the point where cheats have to emulate human behaviour, with human-like reaction times, the value of cheating drops.

    That’s the advanced stuff. Many games don’t even check whether people are running around out of bounds, flying through the air etc. Known bugs and map exploits don’t get fixed for years.


  • ALSA is the lowest level: the kernel interface to the audio hardware. PipeWire provides a userspace service that shares that limited hardware between applications.

    Try setting “export PIPEWIRE_LATENCY=2048/48000” before running an audio-producing application (from the same shell).

    Distortion can sometimes be caused by the audio buffers not getting filled in time, so increasing the buffering as above gives things more time to even out. The value is quantum/rate, so 2048/48000 means a 2048-sample buffer at 48 kHz, roughly 43 ms. You can try 1024 instead of 2048 too.

    There is no doubt a way to set it globally, if it helps; something like the sketch below.

    Good luck!
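
    In case it helps, here is an untested sketch of a global setting via a PipeWire drop-in config. The file name is my own choice; the keys are PipeWire’s clock defaults, and clients can still request their own latency, so this only changes the default:

      # ~/.config/pipewire/pipewire.conf.d/99-latency.conf
      # Apply with: systemctl --user restart pipewire pipewire-pulse
      context.properties = {
          default.clock.rate    = 48000
          default.clock.quantum = 2048
      }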


  • And contributions to codebases that have evolved to meet the owning team’s own needs, where the team similarly doesn’t have the time or space to refactor or review as needed to enable effective outside contributions.

    Enabling InnerSource will be a priority for management for only two weeks anyway, before they focus on something else. And if it even makes it into measurable goals, it will probably be gamed so it doesn’t ruin bonuses.

    Do you also work for $GenericMultinationalCompany, perchance? Do you also know $BillFromFinance?


  • It typically takes a small core team to build the framework/architecture that enables many others to contribute meaningfully.

    Most OSS projects get bugger-all contributions from outside the initial core team, having limited ability to onboard people. The biggest and most active (out of necessity or by design) have a contribution-friendly software architecture and process, and often deliberately organized communities (eg. K8S & CNCF) or major corporate sponsors filling that role.

    Free Software and the ecosystems that grow around it seem to have a better chance of contributing to the common good over the long term. This is simply because most companies are beholden to their shareholders: at some point the urge to squeeze every last cent out of an opportunity comes to the forefront, and many initially well-intentioned efforts get poisoned.

    Free Software licenses like the GPL help to protect our freedom and to set open standards, and are essential for the core technology stack.

    When someone can get annoyed with some shitty software or its license terms and reimplement the core functionality in a few days/weeks/months … eventually someone will get annoyed enough to create some decent free software that kills off the shitty alternatives, or even just a better commercial alternative. This only works because of open platforms & protocols.

    One of the major challenges for consumers today is finding good software in the grey goo of projects and appstores. This harks back to OP’s point about curated collections of software. It’s also where the various foundations add value (CNCF, Linux Foundation, Apache) … along with “awesome X” gitlab repos, which are far better than random youtube videos or ad-riddled blogs or magazine articles.


  • The true strength is in the open interfaces and common protocols that enable competition and choice, followed by the free-to-use libraries that establish a foundation upon which we can build and iterate. This helps us to stay in control of our hardware, our data, and our destiny.

    Practically speaking, there is often more value in releasing something as free software than in commercialising it or otherwise tightly controlling the source code … and for smaller tools and libraries this is especially the case.

    Many bigger projects (eg. linux kernel, firefox, kubernetes, apache*) help set the direction of entire industries, building new opportunities as they go, thanks to the standardization that comes from their popularity.

    It’s also a reason why many companies release software as open source, especially in their early days, establishing themselves as THE leader … for a while at least (eg. Docker Inc, Hashicorp).


  • wg-quick takes a different approach, using an ip rule to send all traffic (except its own) to a different routing table containing only the WireGuard interface. I topped it up with iptables rules that block everything except DNS and the WireGuard UDP port on the main interface. I also disabled IPv6 on the main interface, to avoid any non-RFC1918 addresses appearing in the (in my case) container at all.

    edit: you can also do ip rule matching based on uid, such that you force all non-root users to use your custom route table; roughly like the sketch below.
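
    A minimal, untested sketch of that uid-based variant; the table id (51820), the interface names (wg0/eth0) and the uid range are assumptions, so adjust them to your setup:

      # Send all traffic from non-root users through the WireGuard table.
      ip route add default dev wg0 table 51820
      ip rule add uidrange 1000-65535 lookup 51820

      # Belt and braces: reject anything from those uids that still tries
      # to leave via the main interface, DNS excepted.
      iptables -A OUTPUT -o eth0 -m owner --uid-owner 1000-65535 \
        -p udp ! --dport 53 -j REJECT
      iptables -A OUTPUT -o eth0 -m owner --uid-owner 1000-65535 \
        ! -p udp -j REJECT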


  • You have an opportunity. Give him a pre-installed Linux and a terminal, along with a page of commands that he can run to do neat things (see the sketch below)… including starting the GUI to watch his favourite (ideally pre-downloaded) videos, running some demos, etc.

    Don’t make it too easy, but not too hard either (2, you said? He can type a few characters though…)… Add to it over the years, unlocking the power and guiding him to discover more by himself.

    Kids won’t become tech savvy if we hand everything to them on a silver platter, with touch screens, controllers, and flashy games. A terminal can seem bland and boring, until they make it do something.

    It might just be the most life-changing gift they ever receive.
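
    For illustration, such a page might start out something like this; every program named here is an assumption that would need to be pre-installed, and the file paths are placeholders:

      cowsay "hello"       # a cow says whatever you type
      sl                   # a steam train drives across the screen
      aplay /usr/share/sounds/alsa/Front_Center.wav    # make a sound
      mpv ~/Videos/favourite.mp4    # watch a (pre-downloaded) video
      startx               # start the graphical desktop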


  • I have two AppArmor profiles targeting shell scripts, which can run other programs. One is “audit” (permissive with logging) and the other is “safe” (enforcing).

    The safe profile still has a lot of read access, but not to any directories or files with secrets or private data. Write access is only to the paths and files it needs, and I regularly extend it.

    For a specific program that should have very restricted network access, I have some iptables (& ip6tables) rules that only apply to a particular gid, plus a setgid wrapper script to run the program under that gid (roughly like the sketch below).

    Note: this is all better than nothing, but proper segregation would be better: running things on separate PCs, VMs, or even unprivileged containers.
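
    A rough sketch of the gid-based part; the group name (netjail), the user, and the allowed destination are all assumptions. I use sg here to switch group, since the kernel ignores the setgid bit on scripts; a small compiled setgid wrapper achieves the same:

      # Create the group and add the user who may switch to it.
      groupadd netjail
      usermod -aG netjail alice

      # Let that gid reach one host only; drop everything else.
      iptables  -A OUTPUT -m owner --gid-owner netjail -d 192.168.1.10 -j ACCEPT
      iptables  -A OUTPUT -m owner --gid-owner netjail -j DROP
      ip6tables -A OUTPUT -m owner --gid-owner netjail -j DROP

      # Run the restricted program under that group.
      sg netjail -c '/usr/local/bin/suspicious-program'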


  • Temporal is MIT-licensed and comes with multi-tenant security features; its durable execution model is solid and its scalability is phenomenal. They upsell to the cloud offering, and the default OSS auth plugin is intentionally limited (you might want to develop your own if you self-host). You’d probably only look at the Temporal UI when debugging.

    Windmill is very cool, but it is only suitable for trusted teams due to its security model. If you want to develop scripts and workflows in the web browser and run them together with trusted colleagues, on a schedule etc., then Windmill might just be for you!