Could you elaborate on which value is the most useful?
As far as older discretes and ICs go, I will ALWAYS use the typical parameter in the datasheet. The reason is that these devices were developed 20-40 years ago, before even the state-of-the-art (now outmoded) class 10 clean rooms of the late 1980s, before fully uncontaminated gases, pure and flat silicon wafers, consistent resist, accurate process tools with consistent repeatability, adequate process control, etc., etc. Those same devices manufactured today, whether by the OEM or under license, come in at the stated parameters within a very narrow statistical curve. 40 years ago, 5-10% die yields were the norm. Today 90% and above is the norm. When I retired from Intel in 2004, 96% die yield was the minimum acceptable for all devices, and if it slipped, everyone was thrown into emergency mode to find the issue.
As far as the newer devices of the last 20 years, I still use the typical values, predominantly, because the entire industry has had to improve simply to compete and stay viable. Imagine a manufacturer stuffing boards with robotic machines using parts supplied in cartridge and tape-and-reel form from their suppliers. They cannot afford to test those components individually, and neither can the suppliers test them one by one. Process control is the key from start to finish.
The few exceptions are when I require matched components, or parts needing specific parameters/ratios met for my OWN design. In that case I will buy an adequate supply based on an educated guess, then test and cull.
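To make that concrete, here is a minimal sketch of what I mean by test-and-cull, assuming you have already measured some parameter (hFE, Vbe, whatever matters to the design) across the whole lot. The readings and the 2% match window below are invented for illustration, not taken from any datasheet:

```python
# Hypothetical sketch of "buy extra, test, and cull": measure the parameter
# of interest across the whole lot, then keep the closest-matched pairs.
# The hFE readings and the 2% match window are invented for illustration.
measured_hfe = [212, 198, 205, 221, 201, 209, 195, 218, 204, 207]

def matched_pairs(values, tolerance=0.02):
    """Sort the measurements and pair neighbours that agree within tolerance."""
    ordered = sorted(enumerate(values), key=lambda iv: iv[1])
    pairs, i = [], 0
    while i + 1 < len(ordered):
        (idx_a, a), (idx_b, b) = ordered[i], ordered[i + 1]
        if abs(a - b) / max(a, b) <= tolerance:
            pairs.append((idx_a, idx_b))   # both parts consumed by this pair
            i += 2
        else:
            i += 1                          # no partner close enough; cull it
    return pairs

print(matched_pairs(measured_hfe))   # indices of the parts that pair up
```

The leftovers don't get thrown out; they just go in the bin for less critical positions.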
In my experience, the designs I've worked on are usually too complicated to hand-analyze, so we use software tools to tell us whether they will work within parameters or not. The software gives us the option of analyzing the designs using "pessimistic" or "typical" estimates of device performance. I've had success designing systems using the typical parameters, and have never needed the pessimistic analysis. But I've not completed that many designs (I'm usually a contributor, not the design lead).
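For anyone who hasn't used those tools, here is a toy version of the difference between the two passes (my own illustration, not any vendor's actual engine): evaluate the quantity of interest once with typical values, then again at every min/max corner and keep the extremes. All the component values are made up:

```python
from itertools import product

# Toy "typical vs pessimistic" analysis of a resistive divider feeding a
# comparator threshold. Values are invented for illustration; real tools
# sweep far more variables (temperature, reference drift, aging, ...).
VSUPPLY = (4.75, 5.0, 5.25)      # (min, typ, max) supply rail, volts
R_TOP   = (9.9e3, 10e3, 10.1e3)  # 10k resistor, 1% tolerance
R_BOT   = (4.95e3, 5e3, 5.05e3)  # 5k resistor, 1% tolerance

def divider(vs, rt, rb):
    return vs * rb / (rt + rb)

typical = divider(VSUPPLY[1], R_TOP[1], R_BOT[1])

# Pessimistic pass: evaluate every min/max corner and keep the extremes.
corners = [divider(vs, rt, rb)
           for vs, rt, rb in product(VSUPPLY[::2], R_TOP[::2], R_BOT[::2])]

print(f"typical   : {typical:.3f} V")
print(f"worst case: {min(corners):.3f} V to {max(corners):.3f} V")
```

If the worst-case span still clears your threshold with margin, the typical-only habit never bites you; if it doesn't, that is exactly where the pessimistic pass earns its keep.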
There was a term that surfaced in the late '40s/early '50s: "The Tyranny of Wires". I fear a new term will emerge in the future: the tyranny of complexity. If a designer or design team cannot get their heads around their own creation, the only solution would be a machine intelligence that could think faster and peer at the minutiae with more depth. Some day we might just be clever enough to replace humankind with all sorts of AIs to do all the work, manual and creative. GAD!
I imagine the decision depends on factors other than minimum and typical performance. There are, for instance, the cost of rework, the availability of high-performing devices, etc. I can tell you this much: when I was in charge of grading parts (for the device manufacturer), the devices that made the grade did so on the MINIMUM criteria, so take that for what it's worth. It might be best to buy parts that meet the design goals with their minimum specs.
I think at the root are the design rule criteria (DRCs) that one person, one team, or one entity establishes at the outset. The chosen DRCs may vary from time to time, project to project, or even circuit to circuit. Tradeoffs are inevitable in just about every endeavor.
Well, I've rambled enough, methinks, and it's time for some sleep. I don't know if I've contributed anything of use here, but I think the main point is to remain flexible enough to produce something of value, which is not in the disposable category.
Cheers,
Merv