One common example of this is “if they really made a better product, it would be wildly successful.” Don’t do this. It’s empirically empty; it relies on a baseless claim about the way people (or some idealized abstract notion thereof) would in fact behave if certain conditions obtained. No explanation of why product x is unsuccessful is offered apart from the fact that people aren’t buying it, which reeks of confirmation bias.
It’s very easy for a conditional statement to be true: all that needs to be the case is that the antecedent is false, and regardless of the truth value of the consequent, you have a true proposition. So modus tollens is invoked here to deny that the product really is better, by virtue of the fact that the product isn’t being bought. But ‘better’ here is quite vague (and it makes it easy to slip in some normative claims, which are discussed later), making it easy for the proposition to seem intuitively plausible. What constitutes a ‘better’ product is not immediately clear, and any discussion of what makes it better must also be bracketed within what is perceived as better. Any distinction between the two is often explained away by lazy accounts of rationality: “since people are rational, they obviously know a better product when they see it” (which is itself another blithely asserted conditional claim). So the claim begins to fall apart as soon as you start to unpack it, but its intuitive appeal gives it rhetorical force.
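The truth conditions at issue can be made concrete. Here is a minimal sketch in Python (the function and variable names are mine, purely illustrative) that checks by brute-force enumeration that a material conditional is true whenever its antecedent is false, and that modus tollens is a valid inference form:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

# The conditional is true in three of the four possible truth assignments,
# including both rows where the antecedent is false:
true_rows = [(p, q) for p, q in product([True, False], repeat=2) if implies(p, q)]
assert len(true_rows) == 3

# Modus tollens: from (P -> Q) and not-Q, infer not-P. The form is valid
# because every assignment satisfying both premises satisfies the conclusion:
modus_tollens_valid = all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)
assert modus_tollens_valid
```

The inference form itself is impeccable, which is exactly why the weight of the argument falls entirely on the premises, and in particular on what the vague word ‘better’ is taken to mean.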
Two examples of this:
a) Making inferences to ‘capitalism’ as the cause of things.
b) Invoking ‘evolution’ to account for various human behaviors.
In the case of the former, this is pretty close to being empirically empty. Such explanations are compatible with any state of affairs in an ostensibly capitalist society, so they explain nothing about the specific conditions we actually face. This inference also rarely concerns itself directly with the parties responsible for the decisions that are allegedly the result of capitalist pressure or ideology or class connection (or whatever else is being invoked to make the inference seem more plausible), so it also fails to explain for lack of specificity.
In the case of the latter, it’s plausible to think of evolutionary incentives for both cooperation and competition, thus allowing both to be explainable by reference to ‘evolution’ simpliciter. Notoriously, however, the vast majority of such explanations (especially those in the secondary literature and popular/colloquial discussions) fail to identify the specific pathways by which these allegedly evolutionary traits are expressed.
Both explanations (like all ‘just-so’ stories) fail because they rely far too much on vague concepts: any kind of empirical fact can be shoehorned into them. Because they rely on their own internal logic, these explanations seem more plausible to people, dovetailing nicely with certain intuitions (radical ones in the former case, scientific ones in the latter). Once again, confirmation bias abounds, because vagueness obscures falsifiability.
Often, some course of action is cautioned against by an economist on the grounds that it is ‘inefficient.’ The rhetorical force of this warning is amplified by an equivocation between two different senses of the word ‘efficient’:
a) Efficient in the technical, Pareto sense: no reallocation could make someone better off without making someone else worse off; roughly, maximizing total wealth relative to other possible outcomes
b) Achieving desirable results with a minimum of effort or expenditure of scarce resources
When empirical data or economic models are offered in support of the claim that the planned course of action is inefficient, it’s always in the ‘thin’ sense of (a). But people often hear the stronger negative sense of (b). Indeed, there is often a tacit belief that the inefficiency entailed by (a) undermines, or takes normative priority over, the desired result of (b). For example, wealth redistribution is often criticized on the basis that it is inefficient [sense (a)]. But a more equitable distribution of wealth is the desired result implied by (b), and it might be worth the trade-off. The equivocation between the two senses obscures the idea that a reduction in total wealth might be justified in the service of other policy goals. This is a particularly pernicious example of normative beliefs slipping in, given that economics purports to be value-free, in the manner of an ideal natural science.
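The trade-off can be made concrete with a toy example. The numbers below are entirely hypothetical, chosen only to separate the two senses: a redistributive transfer that shrinks total wealth, and so is ‘inefficient’ in sense (a), while producing the more equal distribution that sense (b) treats as the desired result.

```python
# Hypothetical two-person economy; all figures are illustrative, not empirical.
before = {"rich": 10.0, "poor": 2.0}
# A redistributive transfer with some administrative loss along the way:
after = {"rich": 6.0, "poor": 5.0}

total_before = sum(before.values())  # 12.0
total_after = sum(after.values())    # 11.0

# Inefficient in sense (a): total wealth falls.
assert total_after < total_before

# But the desired result in sense (b) obtains: a more equal distribution.
gap_before = before["rich"] - before["poor"]  # 8.0
gap_after = after["rich"] - after["poor"]     # 1.0
assert gap_after < gap_before
```

Whether losing one unit of total wealth is worth narrowing the gap by seven units is precisely the normative question that the equivocation allows the word ‘inefficient’ to settle by fiat.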
Another example (and an excellent one, at that), from a different theoretical perspective:
A final problem is that hidden normative claims can increase the intuitive appeal of an argument for those who share similar normative beliefs, without actually working out the argument in greater detail or providing more evidence for its empirical claims. A lot of damage can be done by hidden normativity: it can both mislead people and open an argument up to severe objections if the hidden normativity is recognized by a critic.