I once had this kind of body recovery/stress level measuring thingy on me for a few days, and a doctor would then analyze my health and such. I was under some stress those days and (according to the measurements) I wasn't recovering properly even during the nights. But then there was this one, long, flat, deep green curve in the middle of my work day. I checked from my VCS what I was doing during that period: I was optimizing.
I've since noticed this many times. Optimizing is like meditation to me. It's very mechanical (measure), with a sprinkle of creative work (once you know what is slow, it's quite obvious how to make it faster, but just challenging enough to be engaging), and it has a very nice tight feedback loop: Something is slow. I make a change. Now it's fast. Next.
Optimizing is my happy place.
AaronAPU 2 hours ago [-]
I spent 10 years straight doing C++ and assembly optimization. My work is still fun these days but that was probably the most enjoyable work of my career in terms of the actual day to day coding.
Code cleanup in general is the same for me, but it’s really hard to justify putting much time into that when running your own company solo.
jasonthorsness 1 hours ago [-]
What tools did you use to assess the results of your changes?
AaronAPU 16 minutes ago [-]
The routines were individually benchmarked using some custom tools (iterate repeatedly and use statistical analysis to converge on an estimate). Always compared against a plain C reference implementation.
Then there was a system for benchmarking the software as a whole on a wide variety of architectures, including NUMA. With lots of plots and statistics.
Usually you’d eventually end up at a point where the improvements are below the noise floor or they help on some systems and cause regression on others. The rule was usually “no regressions”
VTune for multithreading optimization. Built a fibers and lockfree system for efficient scheduling.
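A minimal sketch of that kind of micro-benchmark harness (repeat the routine many times, take a simple statistic, compare against a plain C-style reference). This is not the custom tooling described above; the kernel bodies and iteration counts are illustrative assumptions:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

// Time one call of `fn` in nanoseconds using the steady clock.
template <typename F>
double time_once(F&& fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(t1 - t0).count();
}

// Run `fn` `iters` times and report the median, which is more robust
// to scheduler noise than the mean.
template <typename F>
double median_ns(F&& fn, int iters = 1000) {
    std::vector<double> samples(iters);
    for (auto& s : samples) s = time_once(fn);
    std::nth_element(samples.begin(), samples.begin() + iters / 2, samples.end());
    return samples[iters / 2];
}

// Dummy stand-ins for the real routines being compared.
volatile unsigned sink = 0;
void reference_impl() { unsigned s = 0; for (unsigned i = 0; i < 10000; ++i) s += i * i; sink = s; }
void optimized_impl() { unsigned s = 0; for (unsigned i = 0; i < 10000; i += 2) s += i * i + (i + 1) * (i + 1); sink = s; }

int main() {
    double ref = median_ns(reference_impl);
    double opt = median_ns(optimized_impl);
    std::printf("reference: %.1f ns, optimized: %.1f ns, speedup: %.2fx\n",
                ref, opt, ref / opt);
}
```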
I remember one fun time between jobs when I stayed up till 4 or 5 am optimizing something. It always felt like I was making progress and about to beat the original implementation.
Unfortunately I had to give up when I was still 10 times slower than the reference lol
senderista 3 hours ago [-]
Same here, last time I was between jobs I optimized my defunct startup's database from ~50K TPS to nearly 5M TPS (no durability, if you're wondering), and that was unbelievably rewarding.
optymizer 48 minutes ago [-]
I see you.
jmull 4 hours ago [-]
> I dislike the “intuition doesn’t work, profile your code” mantra because it seemingly says profiling is a viable replacement for theoretical calculations, which it isn’t.
This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
Measuring/profiling just means observing the system you want to optimize in a systematic way. You certainly won't be very effective at optimizing anything if you don't observe it.
Theoretical calculations means you've formed a model of what's happening and you're devising a plan to optimize against that model. But of course, a useful model needs to represent the significant aspects of your system (and a good model should exclude most insignificant ones). Failing to observe your system means your model could be bad -- focused on insignificant aspects and missing significant ones -- and you'd never know.
purplesyringa 2 hours ago [-]
You're right, I could've phrased that better.
Profiling to find suboptimal code is perfectly fine. Then you need to figure out how to fix it. Many people don't understand how performance optimization works, so they blindly add caching, improve constant time by invoking more low-level methods, etc. This obviously doesn't work, yet intuitively (to those people, anyway) it should produce good results.
That's why the mantra exists: don't trust your intuition, don't believe it when it says these changes improve performance, instead measure that performance and only apply changes that work. This is also perfectly fine, but this is a double-edged sword, and I've seen people go too far in this direction.
For example, they refuse to make any changes that don't immediately improve performance according to the profiler. If they modify one computation and performance decreases, they abandon that path altogether. They treat optimization as a game with a dense fog of war, and they refuse to apply deductive reasoning, and of course intuition, to make changes that, according to the profiler at least, are not immediately rewarding.
mnahkies 11 minutes ago [-]
I think there's a related problem where profiling/measurements can be made poorly and not reflect the real world.
Eg: indexing or partitioning a database table may appear to make things slower if you don't have both a representative amount of data and representative query patterns when you're measuring the change.
You should still measure your changes, but sometimes you need to be careful about measuring them in the right way, and possibly simulating a future context (eg: more scale) before drawing a conclusion.
Intuition about how the context will evolve and what effect that might have on the tradeoffs of different approaches is helpful
necovek 2 hours ago [-]
"Intuitively" literally means without having to learn something.
Adding caches or switching to lower-level calls is definitely something learned, and I wouldn't call it "intuitive".
What I think you are referring to is that sometimes, simply reading and understanding the code can tell you where the problem really is — still, my experience is that you want to measure the before and after to at least identify the general area you should be looking at, and more often than not, I could figure out what needs optimizing and how without having to get detailed profiles.
I did end up being surprised a few times, but it was mostly due to external code like buggy library implementations that didn't do the things they promised (eg. async library really synchronizing everything with a badly placed mutex).
At the same time, it's wrong to focus on a single case or a single profile (executing one code path in one set of circumstances), but what you are describing simply sounds like bad engineering — the fact that you can misuse the tool does not make the tool bad, it's still the engineer wielding it who's at fault.
scratcheee 8 minutes ago [-]
And yet their statement makes perfect sense to me.
Caching and lower level calls are generic solutions that work everywhere, but are also generally the last and worst way to optimise (thus why they need such careful analysis since they so often have the opposite effect).
Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor of course, as a rule of thumb it’s probably still wise to test your improvements, but if you manage to delete an n^2 loop then you really don’t need a profiler to tell you that you’ve made things better.
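A tiny illustration of that kind of algorithmic win (hypothetical duplicate-detection code, not from the article): both functions compute the same answer, and no profiler is needed to know which one scales better.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// O(n^2): compare every pair.
bool has_duplicate_quadratic(const std::vector<std::string>& items) {
    for (size_t i = 0; i < items.size(); ++i)
        for (size_t j = i + 1; j < items.size(); ++j)
            if (items[i] == items[j]) return true;
    return false;
}

// O(n) expected: remember what we've already seen.
bool has_duplicate_hashed(const std::vector<std::string>& items) {
    std::unordered_set<std::string> seen;
    for (const auto& item : items)
        if (!seen.insert(item).second) return true;  // insert() reports "already present"
    return false;
}
```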
Capricorn2481 2 hours ago [-]
Sounds like a tricky balancing act. There are things that are extremely difficult to "game out." CPUs are very complicated. There are optimizations that seem like they could be cache friendly in theory, but aren't in practice.
jandrewrogers 3 hours ago [-]
I think many people have seen both sides of this in practice. I’ve seen engineers follow the profiler into a dead-end because they see nothing that stands out in the profiler or they don’t grok the essential nature of the code or system they are profiling. I’ve seen engineers with deep domain expertise consistently make accurate estimates of how a code change will impact performance without ever using a profiler, because their mental model of the code execution maps to reality with high fidelity.
Profilers fill in gaps in our mental model of code execution but they are not a substitute for it. Computer behavior is largely knowable from first principles, albeit requiring a considerable degree of expertise and detail. For some people in some contexts, the profiler adds little information to what they already understand. I know a few people that do almost all of their optimization work using pencil and paper with great effectiveness and precision. Not something I would recommend for most software engineers but not unreasonable at the limit either.
infogulch 2 hours ago [-]
Optimize according to your mental model; profile to check that your mental model matches reality.
If you optimize infrequently, or haven't profiled code like this recently, or haven't profiled this specific codebase recently, then your mental model is probably due a refresh.
kaptainscarlet 2 hours ago [-]
A lot of the optimisation I do largely consists of a series of highly educated guesses, and the resulting fixes are right in 99% of cases.
hinkley 3 hours ago [-]
I think the confusion sneaks in because “measuring” is something you do several times, and each iteration has a different connotation.
Once you “find” a candidate change you measure it to see if what you did made things worse and you put it back if it did, or maybe you try it in combination with other changes to see if its value is complementary.
But people fuck up all the time reading the initial telemetry, which is often where I come in. I get tired of hearing people say, “we’ve done all we can, look how flat this chart is,” and hand someone my beer. You won’t find all of the good candidates in the percentage of run time list. That’s not all the data that’s there, and not every change that works needs to even be supported by the initial data. It only needs to be supported by the delta afterward.
titzer 4 hours ago [-]
No amount of measuring and squeezing--not even years of it--is a substitute for high-level thinking. And vice versa.
Imagine:
function F() {
  for (let i = 0; i < 10; i++) {
A();
B();
C();
}
}
If we profile this code, we might find out, e.g. B takes the majority of the time--let's say 90%. So you spend hours, days, weeks, making B 2X faster. Great. Now you removed 45% of execution time. But the loop in the outer function F is just a few instructions, it is not "hot"--it won't show up in profiles except for ones that capture stacks.
If you're just stuck in the weeds optimizing hot functions that show up in profiles, it's possible to completely overlook F. That loop might be completely redundant, causing 10X the workload by repeatedly computing A, B, and C, which may not need to be recomputed at all.
There are bazillions of examples like this. Say you find out that a function is super, super hot. But it's just a simple function. There are calls to it all over the code. You can't make it any faster. Instead you need to figure out how to not call it at all, e.g. by caching or rethinking the whole algorithm.
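A sketch of the fix the example above is pointing at, re-stated in C-ish form: if the loop really is redundant (as in the hypothetical), the big win is deleting it, not making B 2X faster. A, B and C are the same hypothetical helpers as in the pseudocode.

```cpp
void A(); void B(); void C();  // hypothetical helpers from the example above

// Before: recomputes everything 10 times even though nothing changes per iteration.
void F_before() {
    for (int i = 0; i < 10; i++) {
        A();
        B();  // 90% of the profile lives here
        C();
    }
}

// After: compute once. The profiler pointed at B, but the real win was in F.
void F_after() {
    A();
    B();
    C();
}
```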
> How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
This happens more than you think. Understanding how the system works in enough detail and also at a high level to formulate a plan is in short supply. Jumping in and hacking in things, like a cache or something, is surprisingly common.
hinkley 3 hours ago [-]
Small functions need special attention not just because they show up as leaf nodes everywhere but also because they are difficult for profilers to account properly. You get two functions listed as each taking 4% of CPU time, and one could easily be taking up twice as much compute as the other. The sort of memory pressure that small functions can generate can end up scapegoating a big function that uses a large fraction of memory because it gets stuck with cold cache or lots of GC pressure from the piddly functions fouling the nest.
One of my best examples of this, I had a function reported as still taking 10% of cumulative run time, after I’d tweaked it as much as I could. But I’d set up a benchmark that called a code path a deterministic number of times and this function was getting called twice as much as it should. I found two sibling methods asking the same question and rearranged them so the answer came as an argument and nixed the duplicate call. I reran the benchmark and instead of getting a reduction of 5% (10/2), I got 20%. That was all memory pressure.
The worst memory pressure I ever fixed I saw a 10x improvement by removing one duplicate call. Now, there was a quadratic part of that call but it was a small enough n that I expected 3x and hoped for 4x, and was as shocked as anyone when it went from 30s to 3s with one refactor.
dmurray 4 hours ago [-]
> it won't show up in profiles except for ones that capture stacks
I don't think I've ever used a profiler that couldn't report you were in F() here. One that only captures your innermost functions really doesn't seem that useful, for exactly the reasons you give.
AtNightWeCode 3 hours ago [-]
Agree with this, but it's not what I concluded from OP. Architectural decisions made at the start are where most optimizations should happen. I remember from school some kids who wrote this super optimized loop, and the teacher said: do you really have to do that same calculation on every iteration?
But in the real world, code bases are massive, and it is hard to predict when worlds will collide. Most things do not matter until they do. So measuring is the way to go, I believe.
hinkley 2 hours ago [-]
Measuring is also useless once someone has introduced bottom up caching.
There’s so much noise at that point that even people who would usually catch problems start to miss them.
The usual response to this is, “well, you can turn caching off to do profiling,” but that’s incorrect, because once people know they can get a value from the cache they stop passing it on the stack. So your function that calls A() three times when it should have called it twice? You find now that it’s being called ten times.
And the usual response to that is, “well it’s free now so who cares?” Except it’s not free. Every cache miss now either costs you multiple, or much more complex cache bookkeeping which is more overhead, and every hit resets the MRU data on that entry making it more likely that other elements get evicted.
For instance in NodeJS concurrent fetches for the same resource often go into a promise cache, but now the context of the closure for the promise is captured in the cache, and it doesn’t take much to confuse v8 into keeping a bunch of data in scope that’s not actually reachable. I’ve had to fix that a few times. Hundreds of megabytes in one case because it kept an entire request handler in scope.
klysm 4 hours ago [-]
I think what he’s saying here is you can’t skip the basic math step to arrive at good performance. Staring at profiling results will lead you to a local minimum.
cogman10 4 hours ago [-]
Profiling doesn't mean you don't do the math. Profiling is simply there to show you that "hey, this is where the problems actually are".
You do profiling because it's WAY too easy to get obsessed about theoretical problems when a simple measurement will show you the actual problems.
You do the math on the actual problem location, not a method with O(n!) which only gets called with n=3.
You still have to look at the entire call stack when profiling (which means thinking about the overarching algorithm).
9rx 4 hours ago [-]
Well, he's saying that intuition does work... But does it really?
If a problem area is so intuitively obvious, why would you introduce the problem in the first place? In reality, performance optimizations are usually needed where you least expect them. Which means that you can't get there intuitively. Hence, the suggestion of using profiling to help track down where the problem is instead.
mjmahone17 3 hours ago [-]
Most code I’ve seen is written in an environment where people can’t afford to “intuit”: thinking through the repercussions of specific language or algorithm choices comes second to solving a problem “well enough”. Building an intuition takes learning how you’ve built something poorly, and for a specific problem can take hours to gain context and walk yourself through many paths.
Bring in a performance expert and their intuition can quickly identify many performance improvements. The tough part for the business is when and where do you pay for the experience of performant code, and when is good enough to leave alone?
mystified5016 3 hours ago [-]
Do you want to claim you've never written quick and ugly code to get something working to come back and fix it up later?
Pretty much everyone I know will throw down an O(n^2) algorithm or whatever in their first pass and replace it with something more thought out once they have the time to think deeply about it.
If you're fretting about optimization at every stage of development, you're really doing it wrong. This is precisely the type of early optimization that everyone and their dog will tell you is bad. You should focus first on making your program work and laying out the logic and data flows. Optimization does not matter until the program is almost finished.
9rx 2 hours ago [-]
Purposefully introducing underperforming code as a tradeoff to meet some other goal (faster delivery, for example) is something else entirely. It doesn't require intuition or profiling – at most requiring only memory of the choice. A fun aside, but not related to what we're talking about.
tough 3 hours ago [-]
One could even say optimization does not matter until the program is already running.
patrick451 2 hours ago [-]
This only works if you don't need real time performance. For anything that does need to run in realtime, you have to be damn sure that whatever your initial approach is can be optimized to run at the required sample rate. Otherwise, you will be ripping out the entire design and starting over. You might as well optimize first.
absolutelastone 4 hours ago [-]
I think the person you are responding to is criticizing the original "mantra" quote not the author's criticism of it(?)
bqmjjx0kac 3 hours ago [-]
FYI, the author is a woman.
jmull 4 hours ago [-]
Yeah, that's why I think it's nonsense.
Measuring doesn't mean don't think. Measuring and thinking are two different things. You need to do them both to optimize effectively.
Joel_Mckay 4 hours ago [-]
Or optimizing a scheduling no-op loop dozens of times because local encapsulation obscures global functional context.
The fact remains most projects that do small trivial modular prototypes first will ultimately know which paths are viable before painting themselves into a corner algorithmically.
Best of luck =3
fabian2k 4 hours ago [-]
While profiling and measuring is very important if you want to optimize performance, there are a lot of things you can do without any profiling. In many situations the consequences of each approach are well known or easily reasoned about. Most of the time it's simply "do less work" = "more performance", or avoiding obvious and well-known patterns like N+1 database queries.
But it's also very easy to mislead yourself that way; many "optimizations" might do much less than you think. So you should avoid implementing more complex or harder-to-understand code just because you think it is faster, but otherwise I'd certainly try to write faster code by default in areas I know well enough to judge that.
tmoertel 3 hours ago [-]
> > I dislike the “intuition doesn’t work, profile your code” mantra because it seemingly says profiling is a viable replacement for theoretical calculations, which it isn’t.
> This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
I believe the mantra “intuition doesn’t work, profile your code” is understood to mean “don't rely on your intuition alone; profile your work.” It's a warning that when it comes to performance optimization, intuition is often unreliable, so you need to supplement your mental model with actual data if you want to avoid big mistakes.
I don't know why the author of the blog post is interpreting the mantra literally, as if it claims intuition is obsoleted by profiling.
kccqzy 4 hours ago [-]
I think theoretical calculations could mean a detailed calculation of the number of times a certain function or operation is called. We all know from tech interviews that there are big O time complexity, but this is usually very hand-wavy and not precise enough. You can usually come up with a more precise formula though it can get messy with recurrences if your algorithm involves recursion. You probably need a computer algebra system. Spending an afternoon doing these symbolic calculations might give you better intuition of why the profiling produced such a measurement.
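As a small worked example of what such a calculation buys you over plain big-O (a generic illustration, not from the article): for a nested loop over all pairs, the exact count of inner-body executions is

\[
\sum_{i=1}^{n} \sum_{j=i+1}^{n} 1 \;=\; \sum_{i=1}^{n} (n - i) \;=\; \frac{n(n-1)}{2},
\]

so at n = 1000 the body runs exactly 499,500 times, a concrete number you can check against a profiler's call counts rather than a hand-wavy O(n^2).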
AtNightWeCode 3 hours ago [-]
Agree. A classic example is compilers that let you choose between optimizing for speed or binary size. But a smaller sized binary is sometimes faster.
tough 3 hours ago [-]
Why not both?
touisteur 3 hours ago [-]
One way to improve performance is to unroll loops and inline code. Unfortunately this increases code size and puts pressure on the instruction cache, making a program sometimes slower. It's probably a lot harder to balance these out in the compiler than to just... sometimes try.
AtNightWeCode 3 hours ago [-]
Impossible. The speed option may do things like loop unrolling, function inlining, and today even far more complicated things than that, and therefore creates larger binaries.
ttd 5 hours ago [-]
Good article that I agree with mostly. One interesting note is this:
> There is no way to provide both optimized assembly and equivalent C code and let the compiler use the former in the general case and the latter in special cases.
This is true, but can be seen as a failure of language and tooling. For example, Halide [1] pioneered (AFAIK) the concept of separating algorithm from implementation at the language level. This separation lets you express the algorithm once, and then "schedule" it by specifying parallelism, vectorization, etc. You can provide multiple schedules for one algorithm, which allows you to specialize / make different choices depending on varying factors.
It's a really interesting concept, though maybe limited in practice to DSLs. I'm not sure a general purpose language would be a good fit for this model, but then, for general purpose programs written in general purpose languages, perf optimization at the level TFA discusses is frequently limited to just specific hot sections. Those hot sections could be extracted out into specialized components written in such a DSL.
[1] https://halide-lang.org/
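For reference, the algorithm/schedule split looks roughly like this, a sketch loosely based on Halide's published blur example (exact tile and vector sizes here are illustrative assumptions, not a tuned schedule):

```cpp
#include "Halide.h"
using namespace Halide;

Func make_blur(ImageParam input) {
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // The algorithm: what gets computed.
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // One possible schedule: how it gets computed. Swapping this block changes
    // performance (tiling, vectorization, parallelism) but not the result.
    blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);
    return blur_y;
}
```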
> Halide [1] pioneered (AFAIK) the concept of separating algorithm from implementation at the language level.
you don't need to go all the way to Halide to do what the article is claiming isn't possible - you can do it just by including a "micro-kernel" in your library and having the code branch to that impl (depending on something at runtime) instead of whatever the C code compiled down to. this is done every single day in every single GPU lib (famously cublas ships with hundreds/thousands of such ukernels for gemms depending on shapes).
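One low-friction way to get that kind of runtime dispatch in plain C/C++ is GCC's function multiversioning (https://gcc.gnu.org/onlinedocs/gcc/Function-Multiversioning...., which a sibling comment also points to). A minimal sketch using the target_clones attribute, assuming an x86-64 target and GCC or a recent Clang; the kernel body is an illustrative placeholder:

```cpp
#include <cstddef>

// The compiler emits one clone per target and a resolver that picks the best
// one at load time based on CPUID.
__attribute__((target_clones("avx2", "default")))
void scale(float* out, const float* in, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * k;  // each clone gets vectorized differently
}
```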
purplesyringa 2 hours ago [-]
I was going for something different: I don't want to choose a different implementation in runtime, I want the compiler to see through my code and apply constant propagation -- not just for constant inputs, but inputs with known properties, like `n < 1000` or `(n & 7) == 0`. I want it to also learn facts about the output values, e.g. that my `isqrt(n) -> m` function always returns `m` such that `m^2 <= n`. None of this is possible with runtime selection because runtime selection was never the point.
ttd 4 hours ago [-]
Ah ok, I see what you mean (and likely the sibling comment too w.r.t. the gcc feature). Yes, that is a fair point - though it still has the substantial downside of maintaining many different implementations of any given algorithm.
This is useful but not equivalent. Using this type of tooling you still have to write the algorithm itself in N versions. Changing the algorithm then requires changing all N implementations. This contrasts with the Halide approach where the algorithm is written once, and then schedules can be modified without worrying that you are changing the algorithm itself.
hansvm 5 hours ago [-]
> too many cases to analyze
One of my early-career successes was just creating a framework for generating every permutation of perf optimizations for every (log-scaled -- clz is very fast) input size and checking which was best, dropping the results into a lookup table of function pointers to branch on. The university had a large supply of heterogeneous computers, replete with all the normal problems like being able to double floating-point addition throughput on Haswell CPUs by abusing the fmadd instruction, so I made a framework (probably closer to a DSL) for encoding your algorithms in a way that you could analyze perf tradeoffs at compile time and tune your result for the given computer. It's kind of like what ATLAS does for some linear algebra tasks.
Such practices are almost never optimal, but they're pretty easy to implement, and the results are near-optimal for almost all inputs. In the tradeoff between human and computer performance, I think it's a nice option.
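A stripped-down sketch of that dispatch idea: bucket the input size by its log2 (the "clz is very fast" trick) and index a table of function pointers. The kernel variants and the cutoff here are placeholders standing in for the results of an offline tuning sweep:

```cpp
#include <bit>       // std::bit_width (C++20)
#include <cstddef>
#include <cstdio>

using Kernel = void (*)(float*, std::size_t);

// Placeholder variants standing in for the actual tuned implementations.
void kernel_small(float*, std::size_t) { std::puts("small-input kernel"); }
void kernel_large(float*, std::size_t) { std::puts("cache-blocked kernel"); }

// One entry per power-of-two size class; in the real framework each slot holds
// whichever variant won the tuning sweep on this particular machine.
Kernel dispatch_table[64];

void run(float* data, std::size_t n) {
    // bit_width(n) - 1 == floor(log2(n)) for n > 0: the clz trick from the comment.
    unsigned bucket = (n > 0) ? static_cast<unsigned>(std::bit_width(n)) - 1 : 0;
    dispatch_table[bucket](data, n);
}

int main() {
    for (int b = 0; b < 64; ++b)                    // stand-in "tuning result":
        dispatch_table[b] = (b < 12) ? kernel_small // size classes below ~4096 elements
                                     : kernel_large;
    float buf[8] = {};
    run(buf, 8);       // prints "small-input kernel"
    run(buf, 100000);  // prints "cache-blocked kernel" (kernels ignore their args in this demo)
}
```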
willvarfar 6 hours ago [-]
I think it is worth making a distinction between "micro" (what the blogpost is about) and "macro", or "tactical" and "strategic", optimisations.
Strategic optimisation is often basically free if you have domain expertise. It's easy to know that the business wants outcome x and algorithm y is the right choice, etc., if it's all internal thought processes. Whereas if you don't know enough, you're likely to make decisions that are very expensive to undo.
jandrewrogers 5 hours ago [-]
I often refer to those as architectural optimizations. Even some of these tend to be sensitive to the details of the operating environment.
karmakaze 3 hours ago [-]
I agree with many of the comments here, including some of the ones that are in disagreement. The difference is context: in most situations the starting point is far from optimal, and any of N better choices is a good improvement.
That doesn't seem to be what this post is talking about. It seems to be talking about well-worn areas, trying to improve the state of the art. An example that illustrates it for me is DeepMind's AlphaTensor finding a better way to multiply matrices[0] in 2022. It wasn't a brute-force solution, but the scale of it makes it appear so.
> On 4x4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen’s algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5x5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4x4 algorithm, and separately tweaked Deepmind's 96-step 5x5 algorithm down to 95 steps in mod 2 arithmetic and to 97[24] in normal arithmetic.[25] Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic.
This to me shows that direct profiling and observation wouldn't have led to the optimization. Improvements needed a sort-of (but not actually) brute-force effort of many people trying, while also being clever with their attempts.
[0] https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...
"Even Apple’s LLVM fork lacks scheduling annotations for Apple Silicon. How am I supposed to write efficient code when Apple doesn’t bother to tune their own compiler?"
In addition to its public fork of LLVM, Apple also keeps a very private fork where (one assumes) they keep all the really juicy stuff.
I agree that it is frustrating not to have the data, but don't confuse "I don't have the data" with "no one has the data".
jmward01 4 hours ago [-]
I have almost always found that simple code runs faster than complex code. I think this is because optimization is likely an NP problem, and like all NP problems, the best algorithm we have for solving it is divide and conquer. The core thing about D&C is that you divide until you reach a level where you can actually find the optimum answer within the resources given, but accept that by dividing the problem you will likely have some high-level inefficiencies. This means simple code, code you understand all pieces of, can actually be locally optimum.
When I see people try to optimize code, I often see them fall into the trap of optimizing too large or complex a problem, which leads to them not actually being able to find a true local optimum and, often, making far slower code than had they just aimed for much simpler code.
This likely-NP behavior runs rampant in software development, where we often think we can design some elaborate process to design things, and when things fail it was because people failed to follow the process and not because the problem was NP. We all love building machines, which is why we likely do this, but unless you know something the rest of the world doesn't, then D&C, and admitting you are only looking for a local optimum, is the only algorithm we have for attacking NP problems. (Shameless plug here for smaller, far more autonomous teams instead of monolithic dev shops with one big shared feature list.)
tmoertel 3 hours ago [-]
I think that there is a different reason that an emphasis on simple code often results in faster systems. When you write simple code, you spend less time writing code. Therefore, you have more time left to invest in optimizing the very small subset of your overall system that actually matters. You didn't burn engineering resources for speed where it didn't matter.
jmward01 2 hours ago [-]
I'd argue that is a big part of the point I am making. If you take too big a bite, the time it takes to build it optimally goes up in an NP manner. If the bites are the right size, then it balances the time/resources you have against all the other bites you make, to get a locally optimal answer given all resource constraints. Long story short, cutting a problem into manageable pieces is a solid strategy. I will add one thing, though: most people think they have cut things into manageable pieces, but in reality they have left them too intertwined and they aren't really independent pieces. For divide and conquer to actually work, the pieces must have clearly defined, and very limited, communication.
hinkley 2 hours ago [-]
Simplifying complex code often exposes optimization opportunities. Kernighan’s Law applies to performance as well as debugging.
tmoertel 2 hours ago [-]
Indeed!
hinkley 2 hours ago [-]
If you can’t make it faster, make it dumber.
hinkley 3 hours ago [-]
It’s like dieting. Everyone wants to hear your one weird trick and zones out when you tell them it’s hard work. Yes, mechanical sympathy is a thing but usually pulling off a series of wins isn’t luck or particularly bad work by your peers, it’s being methodical and persistent. It’s about being stubborn as fuck, and not taking no for an answer.
nostrademons 47 minutes ago [-]
This is coming from the perspective of a performance engineer whose day job is squeezing every last bit of performance out of system libraries and low-level code. This is important work, and it can pay very well if you get one of the few positions in it. But for an application developer whose primary day job is cranking out features and then spending 10% of the time at the end optimizing them, the conclusion (and the headline) very much does not hold.
For them, the systematic way to optimize goes: profile your code, and then apply domain knowledge of the product to optimize the hotspots with these common techniques:
Don't do repeated work. (If you have an expensive invariant calculation in a loop or function call, move it out, to a higher level of the program where it can be done once.)
Save your work. (Caching and memoization.)
Do less work. (Alter the product requirements to use less computationally-intensive techniques in cases where users won't notice the difference.)
Do work when the user isn't looking. (Move computations to background tasks, apply concurrency, perform async requests.)
If all else fails, call in a performance engineer like the author to micro-optimize your building blocks.
You can often get speedups of 3-6 orders of magnitude by applying these techniques, simply because the original code is so brain-dead. Performance engineers like the author tend to work on code that has already been tightly optimized, and so there is less low-hanging fruit to pick.
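A minimal sketch of the "save your work" item from the list above (memoizing a pure, expensive function); the function being cached is a placeholder, and this single-threaded version ignores locking and eviction, which a real cache would need:

```cpp
#include <map>
#include <string>

// Stand-in for some expensive, pure computation.
double score_document(const std::string& doc) {
    double s = 0;
    for (char c : doc) s += c * 0.001;  // pretend this is costly
    return s;
}

// Memoized wrapper: identical inputs are computed once, then served from the map.
double score_document_cached(const std::string& doc) {
    static std::map<std::string, double> cache;
    auto it = cache.find(doc);
    if (it != cache.end()) return it->second;
    double result = score_document(doc);
    cache.emplace(doc, result);
    return result;
}
```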
MatthiasWandel 6 hours ago [-]
An interesting aspect is data dependencies. If your next statement reuses data you just computed, that can cause pipeline bubbles, as that result you want to use just isn't available yet. I dived into that topic for a video about relative performance of old PCs I just published today.
jandrewrogers 5 hours ago [-]
Yes, there is non-obvious structure in some algorithms solely for the purpose of turning a single logical stream of dependent instructions into multiple concurrent streams of dependent instructions running through the same pipeline. The caveat of doing this, of course, is that it typically increases register pressure.
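A classic small example of that structure (hypothetical summation kernel): splitting one dependency chain into several independent accumulators lets the pipeline overlap the additions, at the cost of a few extra registers. Note that this reassociates floating-point additions, so results can differ in the last bits.

```cpp
#include <cstddef>

// One long dependency chain: each add must wait for the previous result.
double sum_single(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

// Four independent chains the CPU can keep in flight concurrently in one pipeline.
double sum_unrolled(const double* a, std::size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];      s1 += a[i + 1];
        s2 += a[i + 2];  s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];  // leftover elements
    return (s0 + s1) + (s2 + s3);
}
```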
PaulKeeble 5 hours ago [-]
When it comes to micro optimisations, the issue is partly that our usual tools of algorithm analysis and our hardware intuition are very far apart. Random memory access is very slow compared to linear access, some branches are considerably worse than others, and it's actually quite hard to predict how much you can improve the low-level details before you start, especially for big changes. Our tools can show us where time is being lost to inefficiencies but can't help us predict how changes will improve things.
inetknght 1 hours ago [-]
Performance optimization isn't a brute-force task. It just (currently) requires a lot of skill, and it's hindered by terrible documentation; performance work in software is only a brute-force task because 99% of software doesn't tell you what its performance impact is.
In C++, you can achieve performance using the often-denigrated standard template library, if you only pay attention to the documented performance requirements for given function calls. But even there it's not the panacea it could be, because it often provides an amortized cost while handwaving away the access pattern (for example: std::unordered_map is great in algorithmic cost theory and terrible in memory access patterns).
What's the algorithmic cost (big-O) of this function call? What's the memory footprint of that function call? Worse: is that documented big-O estimated across a contiguous dataset or discontiguous paged datasets, and does it have to dereference pointers? What's the network cost of a given call? It's hard to know if you don't know that `write()` to a socket could incur 40 or 400 bytes across a network, and don't know whether that network is 1 Mbps, 10 Mbps, 1 Gbps, or localhost, etc., and with how much latency.
For example, when I hand-rolled some x86_64 SIMD instructions to analyze some DNA, I found the Intel Intrinsics Guide [0] 1000% helpful because many of the instructions detailed what sort of performance to expect on specific processor architectures (or, at least, in general). If you read the Intel Instruction Set Reference [1], a lot of similar performance information can be found. The performance I achieved was approximately the theoretical bottleneck between CPU and main RAM; getting better performance would have required a complete algorithm change which would have been Ph.D paper publishing worthy.
Of course, sometimes even low level hardware can have performance-killing bugs.
[0]: https://www.intel.com/content/www/us/en/docs/intrinsics-guid...
[1]: https://www.intel.com/content/www/us/en/developer/articles/t...
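To make the std::unordered_map point above concrete: both of these lookups are "fast" on paper, but the node-based map chases a heap pointer per query, while a flat sorted vector stays in a handful of contiguous cache lines. A sketch, with the key/value types as assumptions:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// O(1) expected, but each lookup hashes and then dereferences a heap-allocated
// node -- typically at least one cache miss per query on large maps.
std::unordered_map<std::uint64_t, std::uint32_t> node_based;

// O(log n) comparisons, but the data is contiguous, so the binary search touches
// only a few cache lines and behaves well when queries are batched.
std::vector<std::pair<std::uint64_t, std::uint32_t>> flat_sorted;  // kept sorted by key

std::uint32_t lookup_flat(std::uint64_t key) {
    auto it = std::lower_bound(flat_sorted.begin(), flat_sorted.end(), key,
                               [](const auto& kv, std::uint64_t k) { return kv.first < k; });
    return (it != flat_sorted.end() && it->first == key) ? it->second : 0;
}
```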
I don't think we're in disagreement. You have to consider big-O cost, memory footprint, exact numbers, know what performance to expect from various abstractions, etc. -- and then you need to choose between multiple alternatives. The first half of this process is absolutely skill-based, but I'm arguing that when you're trying to push performance to its limit, the second half unavoidably becomes expensive and brute-forcy.
For example: do you compress data sent over the network? What level of compression do you use? Changing the data format can affect the optimal compression level, and vice versa; using a higher compression level means you can keep the underlying data simpler. For example, you can replace deserialization with zero-cost casts. But that might mean you spend more memory. Do you have that much memory? If you do, would giving that memory to the database for use as cache be better? And so on.
The individual choices are simple, but they compound and affect the overall performance in unpredictable ways. The only way to be sure you aren't missing something obvious is to check all, or at least most combinations.
enescakir 5 hours ago [-]
The hardest bugs are the ones that only show up after you “optimize.”
hinkley 2 hours ago [-]
The ones that only happen when you remove the debug statements are super fun.
raluk 5 hours ago [-]
For low level benchmarking papi https://icl.utk.edu/papi/ is great tool. It provides access for counters like cache misses, branch misses and number of cycles.
mannyv 4 hours ago [-]
Optimization is hard work because you need to understand how things work to be effective.
It's really just debugging and troubleshooting, but with a different goal in mind.
interactivecode 4 hours ago [-]
While actually measuring and profiling is great, in my experience most webapps can fix so many low-hanging performance issues by shaping the API to match how it's used in the client. It can remove so much mapping and combining of data all over.
loeg 4 hours ago [-]
This is like, tip of the spear stuff, and that's very cool. But in most of the software world, I would argue, performance optimization is knocking out extremely low-hanging, obvious fruit -- it's relatively obvious what's slow, and why, and just picking any of N better approaches is good enough to eliminate the problem.
Misdicorl 4 hours ago [-]
I'll throw my hat in the ring for "disagree". Few kinds of work have such clear and unambiguous results as optimization work (its now X% faster!). Few kinds of work have such incredible and detailed tools to guide your hand in finding where to invest your effort (look at this hot loop!).
The fact that sometimes optimization work is tricky or requires some pre-thinking, or is even gasp counter-intuitive is such a hilarious way to say "this is hard". That's just table stakes starting points for so-so-so much work.
Edit: Heck, even deciding whether to prioritize optimization work or feature work is usually a harder problem than the actual optimization work itself.
oqtvs 4 hours ago [-]
Question coming from the article: what would be better tooling instead of profilers and MCA?
purplesyringa 2 hours ago [-]
I'd love to use a tool that shows the state of every CPU component at each point in time. Performance counters demonstrate global behavior, while what actually matters during optimization is local behavior. I'd like to be able to inspect pipeline stalls and conditions that led to these situations, I'd like to get an estimate on the efficiency of port allocation, I'd like to be able to compare the rate of memory accesses vs computation and get exact numbers, e.g. "you can access 20% more data over the bus without adding CPU stalls".
trhway 3 hours ago [-]
>Performance optimization is hard because it’s fundamentally a brute-force task, and there’s nothing you can do about it.
Fundamentally disagree. First it is about building a mental model of what happens, a kind of analysis stage, and then comparing it to a mental model of how it should or could work, or producing a more efficient algorithm/way of accomplishing the target task.
When people try to brute-force it ("let's try this or that") without having the model in mind, that is frequently a waste, and even when/if it produces some improvement, there is no understanding of or guarantee about which use cases the improvement will cover, whether it would regress some use cases, or whether it will still be there after we push new features/fixes/etc.
purplesyringa 2 hours ago [-]
The problem is that way too often, the model simply doesn't capture enough complexity to be applicable. This happens rarely during high-level optimization but is very common during microoptimization. You can build a model, and it will give you good enough results, but you won't be able to extract those last bits of performance you need to surpass SOTA.
Mawr 2 hours ago [-]
Sure, but that's a different statement. "Micro performance optimizations are fundamentally brute force" - ok.
ForOldHack 3 hours ago [-]
The word is 'Grock' you have to Grock the performance to optimize it.
My father had a PhD in Operations Research/Industrial Engineering.
hinkley 2 hours ago [-]
> Grock
Grock was a famous Swiss clown and once the highest paid entertainer in Europe.
The non-clown word you’re looking for is grok and Robert Heinlein coined it in 1961.