You are viewing a single comment's thread from:

RE: An open-ended question to @ned and @dan

in #steemit • 7 years ago

Yup. The architecture is not capable of handling this much data at the speeds @dan has been so specifically advertising for his 'fancy' Graphene.

I hate to break it to @dan, but there is no way you can build a fast database system with C++ and Boost. For that you would use C or assembler with GLib or homegrown libraries.


Question:
I'm seriously thinking about reviving my old blog, because posts here get lost if they aren't viewed within 1-3 days.
I also can't find my own posts easily when I want to check them.
May I gather the information you posted here and quote it there in an article?

I would love to keep track of such good information, and that's the only good way I see for other people to be able to reach it too, instead of it being lost here.

Or better still, if you have (or could write) a full article on this, I would love that even more.
Waiting for your answer. Thanks.

C++ can be quite fast, but of course assembler and C are somewhat faster. Enough to make a difference, though? I'm not certain.

Code that runs fast is quite distinct from code that gets the best out of the hardware. C++ is loathed by the most famous kernel coder because it tends to lead coders, especially inexperienced ones, to mistake abstractions for the physical machine. Abstractions are for convenience, not performance.
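To make that concrete, here is a minimal C++ sketch (mine, not from the original post) of the same "sum the elements" abstraction over two containers. The source code looks identical, but a node-based std::list scatters its elements across the heap while std::vector keeps them contiguous, so the hardware cost is very different even though the abstraction hides it.

```cpp
// Illustrative only: identical abstraction, very different hardware behaviour.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

template <typename Container>
std::uint64_t sum(const Container& c) {
    // Same source-level abstraction for both containers.
    return std::accumulate(c.begin(), c.end(), std::uint64_t{0});
}

int main() {
    const std::size_t n = 1'000'000;
    std::vector<std::uint64_t> v(n, 1);  // contiguous, prefetch-friendly
    std::list<std::uint64_t>   l(n, 1);  // one heap node per element, pointer chasing

    auto time_it = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        auto r  = f();
        auto t1 = std::chrono::steady_clock::now();
        std::cout << r << " summed in "
                  << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
                  << " us\n";
    };

    time_it([&] { return sum(v); });  // typically several times faster
    time_it([&] { return sum(l); });  // same code, worse cache behaviour
}
```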

My main issue with C++ is that it is excessively complex. My most recent study of CS came from a Russian friend I met while living on the street in Amsterdam, and his obsession was functional programming. I am resolutely a functional programmer now, though I believe some algorithms are better done procedurally or with objects. But no matter which paradigm you work in, in my opinion, if you disregard the functional rule against reaching into a function's data from other functions, you make your code unmaintainable.
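A minimal sketch of that rule, with hypothetical names I've invented for illustration: the first function mutates state it does not own, so every caller is affected by (and can break) every other; the second takes its input and returns a result without hidden writes.

```cpp
#include <vector>

// Discouraged: reaches out and mutates data owned elsewhere, so its effect
// depends on, and silently changes, everything around it.
static std::vector<int> g_balances;
void apply_fee_in_place(int fee) {
    for (auto& b : g_balances) b -= fee;
}

// Preferred: a pure function. Inputs in, result out, no hidden writes;
// it can be reasoned about and tested in isolation.
std::vector<int> apply_fee(std::vector<int> balances, int fee) {
    for (auto& b : balances) b -= fee;
    return balances;  // the caller's original data is untouched
}

int main() {
    std::vector<int> balances{100, 250, 40};
    auto after = apply_fee(balances, 5);   // balances unchanged
    (void)after;

    g_balances = balances;
    apply_fee_in_place(5);                 // mutates shared state for everyone
}
```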

C++ tries to encourage programmers to keep data private within objects, but it tends to produce methods that are too broadly defined and become 'dirty functions'. The object really should be broken into several distinct sub-objects, but the programmer's model mingles them into one, and the bigger these 'objects' get, the harder they are to re-engineer once the model proves wrong, because five different methods interact with the same variables, creating entropy in the algorithm. A sketch of both shapes follows below.
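Here is a hypothetical C++ illustration of that failure mode (the class names and members are mine, not anything from graphene): first a single class whose methods all touch the same fields, then the same behaviour split into smaller objects that each own one cluster of state and can be replaced piecewise.

```cpp
#include <string>
#include <vector>

// The "grown too big" shape: every method reads and writes shared members,
// so changing any one method risks breaking the others.
class Node {
    std::vector<std::string> peers_;
    std::vector<std::string> blocks_;
    std::size_t              height_ = 0;
public:
    void connect(const std::string& peer)  { peers_.push_back(peer); }
    void gossip()                          { /* touches peers_ and blocks_ */ }
    void apply_block(const std::string& b) { blocks_.push_back(b); ++height_; }
    void prune()                           { /* touches blocks_ and height_ */ }
    void report()                          { /* reads everything */ }
};

// The decomposed shape: each sub-object owns its own state; the outer object
// only coordinates them.
class PeerSet {
    std::vector<std::string> peers_;
public:
    void connect(const std::string& peer) { peers_.push_back(peer); }
};

class Chain {
    std::vector<std::string> blocks_;
    std::size_t              height_ = 0;
public:
    void apply_block(const std::string& b) { blocks_.push_back(b); ++height_; }
    void prune()                           { /* only its own state */ }
};

class NodeV2 {
    PeerSet peers_;
    Chain   chain_;
public:
    void connect(const std::string& p)     { peers_.connect(p); }
    void apply_block(const std::string& b) { chain_.apply_block(b); }
};

int main() {
    NodeV2 node;
    node.connect("peer-1");
    node.apply_block("block-1");
}
```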

Entropy is death for distributed systems.