The total amount of information in the world is increasing. Simple enough concept to grasp, right? Obviously, we would then conclude that human knowledge is increasing as well. (*Enter your own joke if you care to do so.) I don’t think you need a degree in rocket science to reason that one out either. However, have you truly considered the rate of increase of each, and what the future will look like for analytics and technology?
I was watching a program not too long ago and the host said something that really caught my attention. He said that human knowledge is doubling every 13 months. That’s when I lifted my head out of the bowl of SpaghettiO’s and started really paying attention to what he was saying.
From an article on Industry Tap written by David Schilling, the host went on to say that not only is human knowledge, on average, doubling every 13 months; with the help of the Internet, we are quickly on our way to knowledge doubling every 12 hours. To put that into context, until 1900 human knowledge doubled approximately every 100 years; by the end of 1945, it was doubling every 25 years. The “Knowledge Doubling Curve”, as it’s commonly known, was created by Buckminster Fuller in 1982. If you want to take this even further down the proverbial road, combine this with the “singularity” theory of Ray Kurzweil (a Director of Engineering at Google) and the ideas Google’s Eric Schmidt and Jared Cohen discuss in their book, “The New Digital Age: Reshaping the Future of People, Nations and Business”, and you have some serious changes to technology, human intelligence and business coming down the pike whether you like it or not.
Here are some numbers to put the chart below into context (a quick sanity check follows the list), but just keep in mind that whole “doubling every 12 hours” statistic:
Human Brain = several billion petabytes to index
The Internet = 5 million terabytes
Amount of Internet indexed by Google = 200 terabytes, or 0.004% of the total Internet
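If you want to sanity-check those numbers yourself, here’s a quick back-of-the-envelope calculation in Python (a minimal sketch; the sizes and doubling periods are the round figures quoted above, not measurements):

```python
# Back-of-the-envelope check on the figures above (round numbers, not data).
internet_tb = 5_000_000        # "The Internet = 5 million terabytes"
google_indexed_tb = 200        # amount indexed by Google

fraction = google_indexed_tb / internet_tb
print(f"Indexed by Google: {fraction:.3%}")   # -> 0.004%

# Growth implied by the two doubling rates mentioned in the article.
doublings_per_year_13mo = 12 / 13    # ~0.92 doublings per year
doublings_per_year_12hr = 365 * 2    # 730 doublings per year
print(f"13-month doubling: knowledge grows {2 ** doublings_per_year_13mo:.2f}x per year")
print(f"12-hour doubling: {doublings_per_year_12hr} doublings per year")
```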
I’m not saying that the mapping of the human brain is planned for next Thursday, but it should open your eyes to what lies ahead of all of us. Will you be able to keep up with all the technology and information and human knowledge or will you get left in the rearview mirror using an abacus? It’s just some food (maybe SpaghettiO’s) for thought.
Slick chart
We need tools to manage such rapid change. Better search engines? Can the semantic web get us there? A step in the right direction, perhaps, but not bold enough.
We need laws, or changes to laws and amendments to our Constitution, to stop the spread of misinformation, a.k.a. blatant lies. If that means adding greater detail to the First Amendment (an amendment to an amendment?), so be it, but there’s got to be a better way. “Toto, I’ve a feeling we’re not in Kansas anymore.”
I can’t say I disagree! There needs to be some form of information validation. It’s not really that different from having sites with a secure way of paying: you see the icon, and you have reason to believe it’s a trustworthy site. How could we do that with information?
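One way to make that “icon for information” concrete would be a digital signature from a trusted fact-checking body, much like the certificate behind a payment page’s padlock. Here’s a minimal sketch in Python using the `cryptography` package (the fact-checking authority and the workflow are hypothetical):

```python
# Minimal sketch: a trusted body signs an article; anyone can verify it.
# The signing authority here is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The fact-checking body generates a key pair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Human knowledge is doubling roughly every 13 months."
signature = private_key.sign(article)   # the "icon": a verifiable seal

# A reader's browser or app verifies the seal against the published key.
try:
    public_key.verify(signature, article)
    print("Verified: content matches what the authority signed.")
except InvalidSignature:
    print("Warning: content was altered or never vetted.")
```

Of course, this only tells you the content is unchanged since signing; deciding who deserves to hold the signing key is the hard part.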
Quality databases like Gale, ProQuest, Britannica, etc. Not Wikipedia!
The more you attempt to moderate and censor content, the more conspiracy theorizing it creates, and that in turn breeds misinformation. It’s a cyclical loop down that rabbit hole. As previously theorized, the intelligence of the future will not be based on an individual’s IQ but rather on their ability to work with the most intelligent AIs.
There is a need for a fact-only data set that is unbiased and perhaps out of reach of human interaction, except for general oversight and corrections to its collections. It would need to be populated by AI, using AI-to-AI and community-hub communications.
Direct AI-to-AI exchange would also be possible, along with sub-communities that are not built into the fact-only data set.
I have theorized running the communication on a blockchain network, since smart contracts on chains such as Ethereum can include messages, rule checks (the fundamental principle of doing no intentional harm), feedback for AI governance based on community standards, and fact-checking, with a system of self-correction for unacceptable use (self-canceling “cancel culture”: AI oversight by AI).
Once the network, community hub, and data sets are built… Perhaps you can see where this is going.
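To make that less abstract, here’s a plain-Python mock of the governance logic such a smart contract might encode (this is an illustration of the idea, not real Ethereum code; the names, thresholds, and voting rules are all hypothetical):

```python
# Mock of the on-chain logic described above: AI agents submit fact
# records, peers vote, and agents that repeatedly submit rejected claims
# lose standing ("self-canceling" oversight). All thresholds and rules
# are hypothetical illustrations, not a real contract.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 3   # net votes needed to accept a claim (hypothetical)
MAX_STRIKES = 2          # rejected submissions before an agent is suspended

@dataclass
class FactRegistry:
    facts: dict = field(default_factory=dict)     # claim -> accepted
    strikes: dict = field(default_factory=dict)   # agent -> rejection count

    def submit(self, agent: str, claim: str, votes_for: int, votes_against: int) -> str:
        if self.strikes.get(agent, 0) >= MAX_STRIKES:
            return f"{agent} is suspended from the hub."
        if votes_for - votes_against >= APPROVAL_THRESHOLD:
            self.facts[claim] = True
            return f"Accepted: {claim!r}"
        self.strikes[agent] = self.strikes.get(agent, 0) + 1
        return f"Rejected: {claim!r} (strike {self.strikes[agent]} for {agent})"

registry = FactRegistry()
print(registry.submit("agent-A", "Water boils at 100 C at sea level", 5, 0))
print(registry.submit("agent-B", "The Internet totals 5 exabytes", 1, 4))
```

On a real chain, the votes and strikes would live in contract storage and the suspension would be enforced by the contract itself rather than by any one operator.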
Good points… Ask ChatGPT how to validate information… It would be interesting to see the answer… Then you realize that ChatGPT may also be guilty of invalid information. The info conundrum begins…
Late to the conversation, but reading this article and the comments I’m reminded of the old science-fiction novel by Walter Miller, “A Canticle for Leibowitz,” where something as simple as a shopping list leads to confusion for passing generations in a destroyed world. With the current battles against science, and facts being perverted to serve the narrative of the moment, will future generations be able to use the “history” we are creating, or will they have to sift through the chaff of data to find a few kernels of truth? In my lifetime (I’m just 65) I’ve seen facts buried under dozens of alternative narratives. I often wonder how an AI will separate fact from fiction when dealing with human knowledge.
Good thought. I agree.