You can install things from random websites for Linux too, though.
Zacryon@feddit.org to Linux@lemmy.ml • Debian is Ditching X (Twitter) Citing These Reasons · 3 · 5 months ago
Ah I see. This is getting ridiculous.
Zacryon@feddit.org to Linux@lemmy.ml • Debian is Ditching X (Twitter) Citing These Reasons · 2 · 5 months ago
I wonder where you draw the line on when the use of resources for technology is okay and when it isn’t.
Zacryon@feddit.org to Linux@lemmy.ml • Debian is Ditching X (Twitter) Citing These Reasons · 77 · 5 months ago
Did it make a mistake?
Zacryon@feddit.org to Linux@lemmy.ml • Debian is Ditching X (Twitter) Citing These Reasons · 3524 · 5 months ago
The reasons (summarized using Copilot):
- The platform no longer aligns with Debian’s values, social contract, code of conduct, and diversity statement.
- Concerns over X becoming a place where people they care about don’t feel safe.
- Abuse on the platform happening without consequences.
- Issues with misinformation and lack of moderation.
Zacryon@feddit.org to Linux@lemmy.ml • Ghostty 1.0 Released, A New GPU-Accelerated Terminal Emulator · 68 · 6 months ago
GPU-rendered text interfaces are pretty ubiquitous already. You can find them in IDEs, browsers, apps, and OS GUIs. Drawing pixels is a job the GPU excels at, no matter whether it’s just text. So I don’t see why we shouldn’t apply that to terminal emulators as well.
Zacryon@feddit.org to Linux@lemmy.ml • Microsoft’s latest security update has ruined dual-boot Windows and Linux PCs · 8 · 11 months ago
This again?
If we’re speaking of transformer models like ChatGPT, BERT, or whatever: they don’t have memory at all.
The closest thing that resembles memory is the accepted length of the input sequence combined with the attention mechanism. (If left unmodified, though, this leads to a quadratic increase in computation time the longer that sequence becomes.) And since the attention weights are a learned property, it is probable in practice that earlier tokens of the input sequence get basically ignored the further they lie “in the past”, as they usually do not contribute much to the current context. A rough sketch of where that quadratic cost comes from follows below.
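A minimal sketch in Python/NumPy, not taken from any particular model, of scaled dot-product attention; the names and shapes are illustrative, but it shows why cost grows quadratically with sequence length (the score matrix is seq_len × seq_len):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_model).
    The score matrix has shape (seq_len, seq_len), so time and
    memory grow quadratically with the sequence length."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ V                                     # (seq_len, d_model)

# Toy example: 8 tokens, 16-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (8, 16)
```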
“In the past”: Transformers technically “see” the whole input sequence at once. But they are equipped with positional encoding, which incorporates spatial and/or temporal ordering into the input sequence (e.g., the position of words in a sentence). That way they can model sequential relationships such as those found in natural language (sentences), videos, movement trajectories, and other kinds of contextually coherent sequences.
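As one concrete flavour, here is a small sketch assuming the sinusoidal scheme from the original Transformer paper (many models instead learn these encodings): a position-dependent vector is simply added to each token embedding, so the same token at different positions looks different to the model.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Classic sinusoidal encoding: each position gets a unique
    pattern of sines and cosines at different frequencies."""
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

# Added directly to the (hypothetical) token embeddings before attention.
embeddings = np.random.default_rng(1).normal(size=(8, 16))
x_with_position = embeddings + sinusoidal_positional_encoding(8, 16)
```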
What decisions, for example?