Other than fraud and deception, what other factors limit the rate of accumulation of scientific knowledge? To learn more, read this Special Article, in which I introduce the idea of a human society knowledge singularity, and the concept of “-k” (‘negative k’). I think you will find it answers some questions and raises others. Post your own opinion/questions.
I have always been fascinated by the Limits of Knowledge.
Let’s say a civilization has F funds to start the process of scientific discovery toward the end of understanding the world and the Universe around it.
They signal how much they value basic research (vB), applied research (vA), and technology development (vT) through the relative investment in each of those three areas. (vB, vA and vT are arbitrary units of importance reflecting how we spend our time, not dollars spent; they thus determine the disbursement into basic, applied and technology development.)
This society’s scientists translate new findings into something of value for society at a fixed rate, TS (translational success; range 0–1).
The society can re-invest funds from the previous year’s gains in k into basic research, applied research and technology development at whatever rate (0%–100%), or beyond 100% if it is seriously committed to research.
Assume that advances in technology have a multiplicative effect on both basic and applied research, and that the effect is equal on both types of research.
With the simple model
k(i) = k(i-1) + TS·RI·(vT·(vB + vA) + vB·vA)
we can learn some surprising things about the parameters that influence the rate of accumulation of knowledge by exploring some scenarios.
With more classical notation, the equation reads:

k_i = k_{i-1} + TS · RI · ( v_T (v_B + v_A) + v_B · v_A )
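The recurrence is easy to play with in a few lines of code. The sketch below is a minimal Python rendering under one reading of the model, in which each year's translated, reinvested gain is TS × RI times the prior year's gain, seeded by the value mix vT·(vB + vA) + vB·vA. That compounding reading (and the function name) are my own assumptions, chosen because they reproduce the plateau, linear, and exponential regimes described in the scenarios below; check it against the Excel file at the end if your interpretation differs.

```python
def simulate_k(RI, TS, vB=10, vA=10, vT=10, years=100):
    """Iterate the kmax model for a number of years.

    Assumed reading: each year's investable, translated gain is
    TS * RI times the prior year's gain, so RI * TS < 1 plateaus,
    RI * TS == 1 grows linearly, and RI * TS > 1 grows exponentially.
    """
    gain = vT * (vB + vA) + vB * vA   # technology multiplies basic + applied
    k, history = 0.0, [0.0]
    for _ in range(years):
        gain *= TS * RI               # translate and re-invest last year's gain
        k += gain                     # accumulate knowledge
        history.append(k)
    return history

dark_ages = simulate_k(RI=0.25, TS=0.3)    # RI * TS < 1: plateaus quickly
matched   = simulate_k(RI=1/0.3, TS=0.3)   # RI = 1/TS: linear growth
explosion = simulate_k(RI=2.0, TS=0.6)     # RI * TS > 1: exponential
```

Plotting any of these trajectories over 100 years reproduces the shapes discussed in the scenarios that follow.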
The Dark Ages
Let’s say anti-science sentiment grows to the point where no matter what scientists learn, and manage to translate into something useful to society, society just won’t or can’t invest. Knowledge tends to increase at a slow rate, and then hold at a steady state.
Let’s say for this example, society is really stingy, and re-invests only 25% of the gains from science into more research. In the first decade, there is a jump in k, but it levels out, and for the next century, that society effectively learns nothing new.
Note the scale of the Y-axis as we change conditions. This analysis assumes a fairly high translational success rate of 30%. So the deserving, beleaguered, underpaid and underappreciated scientists are somehow nevertheless contributing value. Society is eating its seed corn.
Now assume a society in which investment is increased, generously so, such that 80% of new revenues that result from advances in science are re-invested. Assuming the same translational success rate of 30%, what do we see?
Even at 80% re-investment, we see only a modest rise in overall knowledge before it levels off. One would think that such a high re-investment rate would cause at least a linear increase, but it doesn’t. Note that under these two scenarios, society values applied and basic research the same amount as technology (arbitrary value units of 10 each).
Given that technology has a multiplicative effect, what happens if we double the value of technology relative to applied and basic research (vA 10, vB 10, vT 20)? Let’s keep society investing at 80%:
Ah, that’s a bit more impressive – but we’re still not doing something right. Knowledge is still flat: in spite of 100 years of effort, we plateau at a stable level. The investment goes in, but nothing new comes out.
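A quick check makes the point (same compounding reading of the model as assumed earlier, which is my interpretation, not a confirmed form): doubling vT raises where the plateau sits, but with RI × TS still below 1 it cannot change the regime.

```python
def k_after(years, RI, TS, vB=10, vA=10, vT=10):
    # Assumed reading of the kmax model: each year's translated gain is
    # TS * RI times the prior year's, seeded by vT*(vB+vA) + vB*vA.
    k, gain = 0.0, vT * (vB + vA) + vB * vA
    for _ in range(years):
        gain *= TS * RI
        k += gain
    return k

low_tech  = k_after(100, RI=0.8, TS=0.3, vT=10)   # vT valued equally
high_tech = k_after(100, RI=0.8, TS=0.3, vT=20)   # vT valued double
# high_tech sits above low_tech, but both have flatlined:
# k_after(50, ...) is already indistinguishable from k_after(100, ...)
```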
Let’s say the society has fallen on good times, either by luck or by conquest, and has an excess of wealth. Under the conditions above, if it starts actively investing, say, 120% (RI = 1.2), what happens?
Bummer. We see only a modest increase again. We still learn more, but we hit a k-ceiling. We want growth, not stagnation. So simply throwing money at the problem, even at a rate of 120%, is no guarantee of perpetual growth in knowledge.
But let’s say the society continues its economic growth, and ups the RI to 3 (yes, 300% of the revenue realized from scientific discoveries):
Well then. Setting aside whether this is even possible with fixed infrastructure, in the first two years of very, very generous investment we have surpassed the growth of k under previous conditions. But note the model still goes (eventually) to a fixed amount of k. It’s not even (overall) a linear increase. We may be delighted in the first couple of decades by what we’ve learned, but eventually, that stodgy feeling of stagnation will overcome us, and we’ll have a huge, expensive infrastructure that eventually cannot justify its own existence in terms of k.
There exists a set of conditions under which a linear positive growth in knowledge occurs. It turns out that there is an interplay between the rate of return on investment, RI, and the translational success rate, TS, such that whenever RI = 1/TS, we see continuous growth in knowledge. So for TS = 0.1, you need an RI of 10.0; for TS = 0.2, you need an RI of 5.0; for TS = 0.3, you need an RI of 3.33; and so on for a fixed linear increase. That looks like this. (I won’t plot them all, but two major observations are that if you match RI to 1/TS, (1) you break the ceiling, and (2) you end up at the same place in terms of k after 100 years regardless of TS. It is obvious that inept scientists cost more in the long run.)
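That break-even claim is easy to verify numerically, again under the compounding reading I've assumed for the model: at RI = 1/TS the yearly gain neither shrinks nor grows, so k after 100 years comes out the same for every TS.

```python
def k_after(years, RI, TS, base=300.0):
    # base = vT*(vB+vA) + vB*vA with all three values set to 10
    k, gain = 0.0, base
    for _ in range(years):
        gain *= TS * RI   # at RI = 1/TS this factor is exactly 1
        k += gain
    return k

century = {TS: k_after(100, RI=1/TS, TS=TS) for TS in (0.1, 0.2, 0.3)}
# All three land at ~100 * base = 30000: inept scientists (low TS) reach
# the same k, they just cost a much larger RI to get there.
```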
Becoming Hypersapient – A Knowledge Explosion
So the next logical question would be: which combinations of TS and RI would lead to exponential growth of knowledge – our own singularity – in which science and technology work together to blow the lid off the limits – cultural ethical limits be damned, let society keep up, we just want to know things?
Once you have matched RI to 1/TS, you can do one of three things to go exponential:
(a) Increase RI such that RI > 1/TS (invest more money)
(b) Increase TS such that RI > 1/TS (train scientists how to translate their discoveries better)
(c) Increase RI and TS such that RI > 1/TS.
Let’s take the pair RI = 2, and TS = 0.5. Increase RI to 2.1:
Ah, that’s cool. We can know far more now. Yes, society is investing heavily, and our scientists are really good. Translational success of 0.5 is pretty high. What if we kept RI at 2, and increased TS to 0.6?
Incredibly, the TS increase from 0.5 to 0.6 blows the RI increase from 2.0 to 2.1 out of the water. It takes two decades to learn what would otherwise have taken 100 years. What would that be like? Who could keep up? And how could we convince others that what we now know supplants what they thought they knew? Ah, culture drag, a missing parameter from our model.
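The asymmetry is just arithmetic under the compounding reading I've been assuming: the yearly growth factor is the product TS × RI, and bumping TS from 0.5 to 0.6 moves that product to 1.2, while bumping RI from 2.0 to 2.1 only moves it to 1.05.

```python
def k_after(years, RI, TS, base=300.0):
    # base = vT*(vB+vA) + vB*vA with all values at 10; each year's
    # translated gain compounds by the factor TS * RI
    k, gain = 0.0, base
    for _ in range(years):
        gain *= TS * RI
        k += gain
    return k

bump_RI = k_after(100, RI=2.1, TS=0.5)   # growth factor 1.05 per year
bump_TS = k_after(100, RI=2.0, TS=0.6)   # growth factor 1.20 per year
# bump_TS exceeds bump_RI by several orders of magnitude
```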
Let’s ignore that for a moment, and assume culture drag has a technology fix.
What if we take the third option, and increase RI to 2.1 and increase TS to 0.6?
Well, it still takes two decades to reach k = 500,000, but in the long run, things are really off the hook:
Ok, so our children’s grandchildren’s children will know 121 million times more than our children. You know what’s really cool? We can’t fathom what that means. That’s not even scary, scary does not qualify. For me, contemplating that is like standing on the edge of an engraved monolith on a moon circling Saturn, feeling the planetary gravity pulling me ever so softly off the surface, just the last moment before I can no longer feel anything under my feet…
They will have to invent a word for how we should feel about that.
Safe Hacks on Knowledge
The next logical question is: what is the lowest RI we can make and achieve exponential knowledge growth? How good can our scientists get at translation?
Well, for any RI < 1, scientists would have to become multipliers and exponentiators themselves in terms of TS, because every discovery would have to yield more than one translational success (TS > 1). That’s not possible. And accountants don’t actually track, say, market profits from each scientific discovery in all of its applications – they can’t, because of the diffuse nature of translational applications.
There are draws on RI as well; you have to pay your administrators, and if you’re corporate, you have to pay dividends to investors in increasingly large markets. (Did you know I feel your pain? Only a little, though; you’re well-off enough.)
But for a modest RI, like 1.1, a TS of 0.95 goes non-linear, with a massive payoff in k within 100 years.
And our scientists just aren’t that talented. No way.
What about investing in technology to feed a multiplier? To get to k = 40, we need to invest 200 into technology, valuing basic and applied at 10 (graph not shown). Now remember, “invest” here means “value”, which means what we spend our time doing, not how much we spend.
Let’s say our scientists can at best do TS = 0.3. RI then has to be 350% (3.5). Places like Dubai can do this. Google can do this. Amazon can do this. It’s only a question of money, and of training scientists to translate their knowledge into something useful to society. Truly useful. Not made to appear to be useful. Useful in the sense that it can position people to be better able to learn from the explosion of knowledge that could result. That’s the funny thing about future knowledge. Every Western civilization thinks it is the zenith of human development – that no more knowledge can be had. That’s like saying no more songs can ever be written; that all musical instruments that could be conceived of and played have been already. Nonsense. Hubris. Baseless induction.
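Under the compounding reading I've been sketching, the whole "how low can RI go" question reduces to one inequality: growth goes exponential only when RI × TS > 1. That is why RI < 1 demands an impossible TS > 1, and why TS = 0.3 needs something above the break-even RI of 3.33. A hypothetical threshold check (the scenario labels are mine, drawn from the examples above):

```python
# RI * TS > 1 is the exponential condition under the assumed model form.
scenarios = {
    "Dark Ages":                   (0.25, 0.30),
    "generous but capped":         (0.80, 0.30),
    "elite scientists, modest RI": (1.10, 0.95),
    "Dubai/Google/Amazon money":   (3.50, 0.30),
}
for name, (RI, TS) in scenarios.items():
    product = RI * TS
    regime = "exponential" if product > 1 else ("linear" if product == 1 else "plateau")
    print(f"{name:28s} RI*TS = {product:.3f} -> {regime}")
```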
For those who want to tweak the parameters or vary the model, here is an Excel file:
Please credit “kmax model, James Lyons-Weiler, personal communication”, link to this Special Article, and contact me and let me know about your modeling and analyses!
If you love vaccines and like me up to this point, stop reading now. Just kidding. You can do both. I know you can.
Evaluating a Current Science – Vaccines
The technology being used in vaccines has stagnated. Globally, it’s a $25 billion a year market. Where would we be if a tiny fraction of those billions went into finding ways to make vaccines safer? Perhaps the part of society that fears mild childhood conditions like measles, mumps, and chickenpox so much that they have accepted thoughts of hatred for people who choose not to engage in the program, and for those who want newer, safer technologies, could see that the real culprits are those absconding with the funds mandated under the 1986 National Childhood Vaccine Injury Act for research to make vaccines safer and to identify those most at risk of vaccine injury.
There are people who believe (or claim to believe) that no vaccine injuries ever occur. That’s -k. The massive amount of evidence pointing to vaccines contributing to neuro- and immunological conditions is overwhelming. I refuse to participate in -k. Vaccine risk and injury denialism is anti-science, and I won’t have any part of it. Billions have been paid out via the National Vaccine Injury Compensation Program – and taxes on the vaccines pay for it. FDA is now dose-escalation testing new adjuvants – but not existing adjuvants like aluminum hydroxide and AAHS. The HPV trials used AAHS as the placebo – an invalid placebo – and thus those vaccines have not been sufficiently tested for safety. If you doubt this, wait until next week.
Fraud and deception obviously reduce TS and k. We can, in the name of knowledge, do better. In case this starts a movement in Science to “refuse to participate in -k”, and to abandon Science-Like Activities, in this age of icons, here you go. -k is anti-fraud, anti-deception, pro-science, pro +k! Royalties for non-licensed uses (t-shirts, coffee mugs) are welcome as donations either to IPAK or to help with Unbreaking Science. I can’t translate this into funds myself, I don’t have time. I’d get a kick out of it becoming a thing. I’m off to +k!