To me, that makes it sound like you’re writing too much and too complex yaml files manually, and/or that you don’t have good enough CI to catch invalid configurations. Unless, of course, you have very few prod failures overall, and the few that happen are due to yaml indentation, which I still think is a bit weird, since an invalid config caused by incorrect indentation should ideally be caught at compile time (if you’re generating code from the yaml) or by some linter or something (if you’re using it for config).
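To illustrate the kind of failure mode I mean, here's a contrived YAML config (all key names invented) where losing a single indent level still parses as perfectly valid YAML but silently changes the structure — which is exactly why a syntax check alone won't catch it, and you need schema validation or CI:

```yaml
# Intended: "replicas" is a property of "app"
app:
  name: web
  replicas: 3
---
# One indent level lost: still valid YAML, but "replicas" is now a
# top-level key, so the app may silently fall back to some default
app:
  name: web
replicas: 3
```

A linter can flag style, but only something that knows the expected schema can tell you the second document is wrong.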
I’ll agree that significant whitespace can be a PITA (one of the reasons I prefer json over yaml), but at the same time I think improper or lacking indentation hurts readability even more than significant whitespace. Toml basically encourages a completely flat structure, where objects and sub-objects are defined all over the place. At that point, I much prefer an enforced structure with whitespace.
.vscode would like a word. But besides that, I just can’t understand why even someone that hates JSON would choose TOML over YAML for a config file.
I’ve never gotten to be good friends with toml. I’ve never liked that the properties of some thing can be defined all over the place, and I’ve definitely never liked that it’s so hard to read nested properties. JSON is my friend.
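A contrived example of the “defined all over the place” problem (all table and key names invented): in TOML, nothing stops properties of one table from being declared far apart in the file, so the full shape of an object is only visible by scanning everything.

```toml
[server]
host = "example.com"

[logging]
level = "info"

# Still adding to "server", several sections later:
[server.tls]
cert = "/etc/certs/server.pem"

# And this nested property belongs to "logging", which is easy
# to miss when skimming the file top to bottom
[logging.rotation]
max_files = 10
```

In JSON the equivalent object would have to be written as one contiguous nested block, so the structure is enforced by the syntax itself.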
That was really a fantastic read!
Similarly, what would you gain by saying uint32_t const* x = my_var.get<uint32_t>();
To be frank: You gain the information that MyConcreteType::get<uint32_t> returns a uint32_t, which I otherwise couldn’t infer from the docs. Of course, I could assume it, based on the template parameter, but I don’t want to go around assuming a bunch of stuff in order to read docs.

Take an example like auto x = my_var.to_reduced_form(): it’s very clear that x is the “reduced form” of my_var, which could be meaningful in itself, but what type is it? I need to know that if I want to do anything with x. Can I do x += 1? If I do, will that modify my_var? Let’s say I want to make a vector of whatever to_reduced_form returns… and so on.

All these questions are very easily answered by MyConcreteType x = my_var.to_reduced_form(). Now I immediately know that everything I can do with my_var, I can also do with x. This makes me happy, because I need to do less digging, and the code becomes clearer to read.
Thanks, that was a good read :)
However, my impression is that he’s largely using the existence of templates and polymorphism as arguments that “we don’t really care about type”. I disagree: A template is essentially a generic type description that says something about what types are acceptable. When working with something polymorphic, I’ll prefer ParentClass&, to indicate what kind of interface I’m working with.

Sure, it can be very useful to hide exact type information in order to generalise the code, but I think that’s a weak argument for hiding all type information by default, which is what auto does.
I really like C++ (I know, shoot me), and I think auto should be avoided at (almost) all costs.

One of the things I love about a language like C++ is that I can take one glance at the code and immediately know what types I’m working with. auto takes that away while adding almost no benefit outside of a little convenience while writing.

If I’m working with some very big template type that I don’t want to write out, 99/100 times I’ll just have a using somewhere to make it more concise. Hell, I’ll have using vectord = std::vector<double> if I’m using a lot of them, because I think it makes the code more readable. Just don’t throw auto at me.

Of course, the worst thing ever (which I’ve seen far too often) is the use of auto in examples in documentation. Fucking hell! I’m reading the docs because I don’t know the library well! If you’ve gone to the trouble of writing examples, at least let me know the return type without making me dig through your source code!
100 % agree here. If you’re testing an actual use-case, it’s fair to compare realistic python to realistic C. However, I would argue that at that point you’re no longer benchmarking Python vs. C as languages, but Python vs. C for that particular use-case.
That completely depends on what you’re doing. If you’re doing tasks that python can completely offload to some highly optimised library written in C/C++/Fortran, then yes. However at that point you’re not really comparing Python to C anymore, but rather your C implementation to whatever library you used.
A fair comparison is to compare pure Python to pure C, in which case you need to mess up the C code pretty badly if Python is to stand a chance.
These two are not interchangeable, or really even comparable. Make is a program that generates non-source files from source files; cmake is a high-level tool that generates makefiles (among other build systems).
If you’re writing anything more than a completely trivial makefile I would heavily recommend learning cmake. It makes your build system much, much more robust, far easier to maintain, much more likely to work on other systems than your own, and far easier to integrate with other dependent projects.
My primary experience with plain make was when I rewrote a 2000+ line make system in a project I maintain with about 200 lines of cmake. We were setting up some CI that required us to clone and build some dependencies, which was an absolute PITA to handle cross-platform with plain make, but trivial with cmake.
PS. The cmake docs suck for anyone that hasn’t used cmake for 10 years already.
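For anyone who hasn’t seen it, a minimal sketch of what a cmake build looks like (project, directory, and dependency names are all invented): a few target declarations replace pages of hand-written, platform-specific make rules.

```cmake
cmake_minimum_required(VERSION 3.15)
project(mymodel LANGUAGES CXX)

# One library target; cmake works out compiler flags per platform
add_library(mymodel src/model.cpp)
target_include_directories(mymodel PUBLIC include)
target_compile_features(mymodel PUBLIC cxx_std_17)

# Pull in a dependency cloned by CI — cross-platform, no hand-written rules
add_subdirectory(external/somedep)
target_link_libraries(mymodel PUBLIC somedep)

add_executable(run_model src/main.cpp)
target_link_libraries(run_model PRIVATE mymodel)
```

The same file then builds on any supported platform with `cmake -B build` followed by `cmake --build build`.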
thebestaquaman@lemmy.world to Programmer Humor@lemmy.ml • Does this exist anywhere outside of C++? • 1 · 11 months ago

I don’t mean to say that C++ is in any way without faults. If performance is crucial, that can definitely be a reason to forgo some of the guard-rails, and then you’re on your own.
I guess my issue with the “C++ is unsafe” trope is that it usually (in my experience) comes from people not having heard of all the guard-rails in the first place, or refusing to use them when appropriate. They write C++ as if they were writing C, and then complain that the language is unsafe when they’ve made a mistake that is easily avoided by using STL containers.
thebestaquaman@lemmy.world to Programmer Humor@lemmy.ml • Does this exist anywhere outside of C++? • 1 · 11 months ago

As I said: There are tools in place in modern C++ that are designed to catch the errors you make. If you are using a raw pointer when you could have used a reference, or accessing an array without range checking, those are choices you’ve made. They may be valid choices in your use-case, but don’t go complaining that the language is “unsafe” when it gives you the option to code with guard rails and you choose to forgo them.
thebestaquaman@lemmy.world to Programmer Humor@lemmy.ml • Does this exist anywhere outside of C++? • 11 · 11 months ago

Memory-unsafe C++ is a choice. With modern C++ you have no excuse for accessing raw pointers or arrays without range checking if memory safety is a priority.
I actually enjoy this part, where I’ve written some intricate code of sorts and get to spend some time writing a memo that explains how it works.
I usually don’t even end up reading them, because the process of writing a good memo will make me remember it.
Don’t worry, referring to yourself in third person leaves enough of an impression that what you answer probably doesn’t matter all that much.
I wholeheartedly agree: In my job, I develop mathematical models which are implemented in Fortran/C/C++, but all the models have a Python interface. In practice, we use Python as a “front end”. That is: when running the models to generate plots or tables, or whatever, that is done through Python, because plotting and file handling is quick and easy in Python.
I also do quite a bit of prototyping in Python, where I quickly want to throw something together to check if the general concept works.
We had one model that was actually implemented in Python, and it took less than a year before it was re-implemented in C++, because nobody other than the original dev could really use it or maintain it. It became painfully clear how much of a burden Python can be once a code base grows past a certain size.
I have next to no experience with TypeScript, but want to make a case in defence of Python: Python does not pretend to have any kind of type safety, and more or less actively encourages duck typing.
Now, you can like or dislike duck typing, but for the kind of quick and dirty scripting or proof of concept prototyping that I think Python excels at, duck typing can help you get the job done much more efficiently.
In my opinion, it’s much more frustrating to work with a language that pretends to be type safe while not being so.
Because of this, I regularly turn off the type checking on my Python linter, because it throws warnings about “invalid types”, due to incomplete or outdated docs, when I know for a fact that the function in question works with whatever type I’m giving it. There is really no such thing as an “invalid type” in Python, because it’s a language that does not intend to be type-safe.

Oh, I definitely agree that meaningful whitespace can be a pain, and I’m not a very big fan in general (although, readability-wise, I prefer meaningful whitespace like in Python to terribly indented code). I guess my point was just that if you’re having a lot of failures due to incorrect indentation, it sounds like a systemic issue somewhere. While meaningful indentation can be annoying, it sounds like a symptom of something bigger if it’s responsible for most of your production failures.
I think the bottom line for me is that if a config file regularly causes errors because of incorrect indentation, it should probably be refactored, since it’s clearly not easy enough to read that those errors are caught.