I’ll post some links, but it’s a pretty busy week for me already, so give me some time.
An interrupt is an input that can be triggered to interrupt normal execution. It is used, e.g., by hardware devices to signal to the processor that something has happened which requires timely processing, so that real-time behavior can be achieved (for varying definitions of real-time). Interrupts can also be triggered by software. This explanation is a gross oversimplification, but it is what is most likely relevant and interesting for your case at this point.
The commands you posted sort the interrupts and output the one with the highest count (via head -1), thereby determining the interrupt that gets triggered the most. They then disable that interrupt via the user-space interface to the ACPI interrupts.
One of the goals of ACPI is to provide a kind of general hardware abstraction without knowing the particular details about each and every hardware device. This is facilitated by offering (among other things) general-purpose events - GPEs. One of these GPEs is being triggered a lot, and the handling of that interrupt is what causes your CPU spikes.
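For illustration, here’s a rough sketch (mine, not your exact commands) of what that pipeline boils down to, assuming the usual sysfs interface at /sys/firmware/acpi/interrupts; run it as root:

# rough sketch, run as root: find the most-triggered GPE and disable it,
# mirroring the sort | head -1 pipeline discussed above
from pathlib import Path

ACPI = Path("/sys/firmware/acpi/interrupts")

# each gpeXX file starts with its trigger count, followed by status flags
counts = {f.name: int(f.read_text().split()[0]) for f in ACPI.glob("gpe??")}

noisiest = max(counts, key=counts.get)
print("most-triggered GPE: %s (%d hits)" % (noisiest, counts[noisiest]))

# same effect as `echo disable > /sys/firmware/acpi/interrupts/gpeXX`
(ACPI / noisiest).write_text("disable")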
The changes you made will not persist after a reboot.
Since this is handled by kworker, you could try and investigate further via the workqueue tools: https://github.com/torvalds/linux/tree/master/tools/workqueue
In general, Linux will detect if excessive GPEs are generated (look for the term “GPE storm” in your kernel log) and stop handling the interrupts by switching to polling. If that happens, or if the interrupts are manually disabled, the system might not react to certain events in a timely manner. What that means for each particular case depends on what the interrupts are responsible for - hard to tell without additional details.
I shudder to think OP’s post was written by an actual person…
With type annotations, this problem is mostly alleviated in practice, while still keeping the productivity gains of duck typing.
I am on my 4th personal TUXEDO laptop, never had any issues. I actually started giving them to the devs at my company, no complaints so far.
They don’t offer my choice of OS, and I wouldn’t use a preinstalled OS anyway, so I can’t comment on that.
Generally all correct, here is a resource with a lot of in-depth information and additional links:
https://batteryuniversity.com/article/bu-808-how-to-prolong-lithium-based-batteries
It’s not true that precision measurements are impossible with low-value resistors; a lot of measurement equipment works exactly like that. It might just be more expensive than what the manufacturer is willing to budget for.
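A quick back-of-envelope illustration (the numbers are mine, purely for scale):

# Ohm's law on a current shunt: a low-value resistor drops very little
# voltage, so the precision moves into the amplifier/ADC, not the resistor
shunt_ohms = 0.001          # 1 milliohm shunt (illustrative value)
current_a = 10.0            # 10 A load current
drop_v = shunt_ohms * current_a
power_w = drop_v * current_a
print("drop: %.1f mV, dissipation: %.2f W" % (drop_v * 1000, power_w))
# -> drop: 10.0 mV, dissipation: 0.10 W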
Tuxedo also offers products with an aluminum body, and while they do import the hardware from China, you get the local service and warranty guarantees any company in the EU must provide, so that’s fine by me.
Also, honest question: what do you think a unique laptop is, in particular when buying from a mass consumer brand like Lenovo? I really can’t figure out what that’s supposed to mean.
Oh, that makes everything a lot easier. The majority of the relevant settings will be in your home folder then, i.e. in the ${HOME}/.config folder, while some might also be in ${HOME}/.local/share etc.
You probably want to back up the whole home folder anyway, so that would pick up most of your settings. In order to make that work on a different system, you would have to install all applications you were using on the tablet as well. Luckily, software installation in Linux is pretty easy, so you can export a list of installed applications from the Surface and then re-install them on your target system before migrating your home folder. The software list should become part of your backup. See e.g. https://unix.stackexchange.com/questions/82880/how-to-replicate-installed-package-selection-from-one-fedora-instance-to-another for an idea of how to perform this.
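If you want to script those two steps, a minimal sketch could look like this (the dot-folders from above, plus rpm-based package tooling as in the Fedora link, are assumptions):

# minimal sketch: archive per-user settings and record the package list
import subprocess
import tarfile
from pathlib import Path

home = Path.home()

# 1) archive the usual per-user settings folders
with tarfile.open(home / "settings-backup.tar.gz", "w:gz") as tar:
    for d in (".config", ".local/share"):
        src = home / d
        if src.exists():
            tar.add(src, arcname=d)  # paths relative to $HOME ease restoring

# 2) dump the installed-package list so it can be replayed on the target
pkgs = subprocess.run(["rpm", "-qa", "--qf", "%{NAME}\n"],
                      capture_output=True, text=True, check=True).stdout
(home / "package-list.txt").write_text(pkgs)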
I have used this approach in the past and it will get you 95% there. There might be some global system settings that you’d like to also transfer to your new system, but you can add those as you discover you miss them on the target system.
In general, no, this won’t work. In your case, you’re lucky since at least the Surface Go is using an x86 CPU, so it’s not completely out of the question, but transferring the image as-is to a completely different device typically does not work without modification.
Simple example: your target device might not refer to existing hardware (let’s say a storage medium) in the same manner as your old device, so the existing references in your cloned image won’t work. There are other issues of course, e.g. missing drivers for different hardware present on the target device.
It’s possible to modify the image so it would boot, but given the Surface runs Windows, that’s going to be a chore. I’d consider this an interesting project if bored on a slow weekend, but I’d most likely just do a filesystem backup of relevant data and call it a day.
Honestly, that just seems like you’re treating dd as some kind of arcanum. dd works just fine and I’ve been doing 1:1, full-system backups with it for decades, no issues. Honorable mention for ddrescue / dd_rescue for recovery options, i.e. retrying bad sector reads etc.
In fact, when Clonezilla doesn’t know your filesystem, it will simply employ dd to copy the data sector by sector.
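That sector-by-sector copy is conceptually trivial; here’s a toy sketch of what it boils down to (illustration only - real-world use is just dd itself, e.g. dd if=/dev/sdX of=disk.img bs=4M conv=noerror,sync status=progress):

# toy illustration only: copy a device byte for byte, regardless of
# filesystem or whether blocks are "in use" - this is all dd does
SECTOR = 4096

def raw_copy(src, dst):
    with open(src, "rb", buffering=0) as s, open(dst, "wb") as d:
        while True:
            block = s.read(SECTOR)
            if not block:
                break
            d.write(block)

# raw_copy("/dev/sdX", "disk.img")  # needs root; device name is a placeholder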
I’d argue that Clonezilla (due to its use of partclone) is actually a less complete form of backup, since it will only copy used blocks: you don’t really end up with a clone of your devices, just a copy of what partclone believes to be your data. Don’t get me wrong, that is fine in most use cases, but there are some cases where this doesn’t cut it, e.g. wanting to back up / restore a storage device from a PLC where the vendor had the glorious idea to store licensing data in unused sectors, or when you want to create a forensic disk image. You might want to look into dc3dd then, although that absolutely works using regular old dd as well; dc3dd just adds some amenities.
All I want to say is: dd is an absolutely reliable tool and can be a one-stop solution for device backups. Also, I have absolutely no quarrels with Clonezilla; if it fits what you’re trying to do and it works, great.
The best way would in fact be testing it with an electronic load that applies a precise and well-known load to the battery and integrates the delivered charge (the capacity) until a matching cutoff condition is reached.
However, the majority of people do not happen to have access to such an instrument, so I’d say your suggestion is a close approximation of the best way. It could be augmented with simple measurements that most people can do at home, for a reasonable, quantifiable judgment.
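The integration itself is simple; here’s a toy sketch of the coulomb counting an electronic load automates (the sample format is made up):

# toy coulomb counter: integrate drawn current over time until the
# battery hits the cutoff voltage - what an electronic load automates
def capacity_ah(samples, cutoff_v):
    """samples: iterable of (dt_seconds, volts, amps) readings."""
    ah = 0.0
    for dt, volts, amps in samples:
        if volts <= cutoff_v:
            break
        ah += amps * dt / 3600.0  # amp-seconds -> amp-hours
    return ah

# e.g. a constant 0.5 A draw logged every 60 s down to a 3.0 V cutoff:
# print(capacity_ah(log, cutoff_v=3.0))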
Just for the sake of completeness:
https://github.com/BurntSushi/ripgrep
https://github.com/ggreer/the_silver_searcher
It’s useful to be able to do this without additional tools (and there are more applications for the general command setup discussed in the video), but in practice, ease of use and performance often make a difference.
I am aware of what you are saying; however, I do not agree with your conclusions. Just for the sake of providing context for our discussion: I have written plenty of code in statically typed languages, starting in a professional capacity some 33 years ago when switching from pure TASM to AT&T C++ 2, so there is no need to convince me of the benefits :)
That being said, I think we’re talking about different use cases here. When I’m talking configuration, I’m talking runtime settings provided by a customer, or service tech in the field - that hardly maps to a compiler error as you mentioned. It’s also better (more flexible / higher abstraction) than simply checking a JSON schema, and I’m personally encountering multiple new, custom JSON documents every week where it has proven to be a real timesaver.
I also do not believe that all data validation can be boiled down to simple type checking - libraries like pydantic handle complex validation cases with interdependencies between attributes, initialization order, and fields that need to be checked by a finite automaton, regex or even custom code. Sure, you can graft that on after the fact, but what the library does is provide a standardized way of handling these cases with (IMHO) minimal clutter. I know you basically made that point, but the example you gave is oversimplified - at least in what I do, I rarely encounter data that can be properly validated by simple type checking. If business logic and domain knowledge have to be part of the validation, I can save a ton of boilerplate code by writing my validations using pydantic.
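To make that concrete, here’s a minimal sketch of the kind of interdependent validation I mean (pydantic v2 syntax; the fields and rules are made up):

# minimal sketch (pydantic v2): type checks, a regex-constrained field,
# and a cross-field business rule in one declarative model
from pydantic import BaseModel, Field, model_validator

class ServiceConfig(BaseModel):
    host: str = Field(pattern=r"^[a-z0-9.-]+$")  # regex validation
    port: int = Field(ge=1, le=65535)            # range check
    use_tls: bool = False
    ca_file: str | None = None

    @model_validator(mode="after")
    def tls_needs_ca(self):
        # domain rule spanning two fields - not expressible as a plain type
        if self.use_tls and self.ca_file is None:
            raise ValueError("use_tls requires ca_file to be set")
        return self

# raises a structured ValidationError instead of failing somewhere downstream
cfg = ServiceConfig.model_validate_json('{"host": "plc-7", "port": 502}')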
Type annotations are a completely orthogonal case and I’ll be the first to admit that Python’s type situation is not ideal.
I’m not talking about type checking, I’m talking about data validation using pydantic. I just consider mypy / pyright etc. another linting step - that’s not even remotely interesting.
In an environment where a lot of data is being exchanged by various sources, it really has become quite valuable. Give it a try if you haven’t.
I wholeheartedly agree. The ability to describe (in code) and validate all data, from config files to each and every message being exchanged is invaluable.
I’m actively looking for alternatives in other languages now.
Your USB ethernet adapter is down according to this output.
In case Ubuntu Server comes with e.g. dhclient installed, you should be able to get a working network connection by ensuring a cable is properly plugged into your USB ethernet adapter and running
sudo dhclient -v enx949aa9857457
You might want to post the output of that command here. Alternatively, configure the USB adapter using one of the management tools mentioned in this thread already.
As a general rule, maybe don’t use shorthand terms you invented in posts that are supposed to provide information to the people trying to help you, just so you don’t confuse them any further.
I started working with CUDA at version 3 (so maybe around 2010?) and it was definitely more than rough around the edges at that time. Nah, honestly, it was a nightmare - I discovered bugs and deviations from the documented behavior on a daily basis. That kept up for a few releases, although I’ll mention that NVIDIA was/is really motivated to push CUDA for general-purpose computing and thus the support was top notch - it still was in no way pleasant to work with.
That being said, our previous implementation was using OpenGL and did in fact produce computational results as a byproduct of rendering noise on a lab screen, so there’s that.
Have you considered creating a macro in any image editor that supports macros and assigning that to a button / keyboard shortcut?
GIMP certainly has scripting features that can act as macros (Script-Fu / Python-Fu). Maybe this will help: https://www.gimp.org/tutorials/Automate_Editing_in_GIMP/
You can still edit a mask / selection with the regular UI, then trigger the cut/merge process you desire based on that selection.
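As a rough sketch of the idea in GIMP 2.10’s Python-Fu (the menu path and names are mine; treat it as a starting point, not a tested plug-in):

# rough Python-Fu sketch for GIMP 2.10: cut the current selection and
# paste it back as a new layer, bound to a menu entry (which GIMP lets
# you assign a keyboard shortcut to)
from gimpfu import *

def cut_selection_to_layer(image, drawable):
    pdb.gimp_image_undo_group_start(image)
    pdb.gimp_edit_cut(drawable)                  # cut the active selection
    floating = pdb.gimp_edit_paste(drawable, FALSE)
    pdb.gimp_floating_sel_to_layer(floating)     # keep it as its own layer
    pdb.gimp_image_undo_group_end(image)

register(
    "python-fu-cut-selection-to-layer",
    "Cut selection to new layer",
    "Cut the current selection and paste it as a new layer",
    "you", "you", "2024",
    "<Image>/Filters/Custom/Cut Selection To Layer",
    "*",        # works on any image type
    [], [],
    cut_selection_to_layer)

main()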