- 04 Apr 2017 05:38
#14793382
Last year, the physicist and science popularizer Neil deGrasse Tyson suggested a form of government called "Rationalia", whose description was short enough to fit into a single tweet:
"Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence"
There was a lot of negative backlash to this idea, some of it justified in my opinion. For me, the central problem with the concept was that no mention of utility or preference was made anywhere in the proposal. For example, the late Nat Hentoff was an outspoken "pro-lifer", but he was also an atheist, and so likely saw the evidence in much the same way as atheists who consider themselves pro-choice; what separated him from them was not the evidence itself but his interpretation of it and his preferences over states of affairs.
That being said, I don't at all balk at the idea of building a more rational world than the present one. The question is how to achieve this outcome. Unfortunately, NDT subscribes to the woolly-minded humanistic notion that children are "born scientists" and have this universal innate tendency quashed by Prussian-style authority figures early in their lives, as here:
[youtube]bvFOeysaNAY[/youtube]
There is absolutely no evidence from psychology supporting this popular but most likely false claim. There is, on the other hand, evidence from psychology that heredity plays a substantial role in shaping major psychological traits, and that scientists and other creative professionals tend to differ systematically from the general population on those traits. A less palatable but more defensible view is that the scarcity of scientific, or more broadly rational, thinking in today's society and in all previous societies stems from people generally not being intelligent enough to practice it, and/or having some kind of aversion to thinking.
At this point, the outlook for a more rational world looks quite bleak. If people are by nature typically too deficient to become the creators of that new world, how can it be brought into being? For me, the answer is the combination of new technologies and old rivalries. By "new technologies", I mean the convergence of nanotechnology, biotechnology, information science and cognitive science sometimes known by the acronym "NBIC". I first learned of this acronym from the report Converging Technologies for Improving Human Performance, commissioned by the NSF and the Department of Commerce in 2002 to assess the future impact of these technologies. It concludes that technologies such as gene therapy and artificial intelligence will play an increasingly strong role in society in the coming decades. The report, along with other sources, notes (though not in quite as blunt terms as I am using) that successful adoption of these technologies may well afford the adopters a great advantage, both economic and military, over those who fail to adopt them.
Early indicators of where events are headed fully support the notion that we are entering a new kind of arms race. Where the arms race of the mid-20th century revolved around a particular type of weapon and defenses against it, here the competition will be over the attainment and use of something more abstract: knowledge. The US military research agency DARPA is a particularly interesting bellwether of things to come. Its stated purpose has for years been to prevent "strategic surprise", so the research it funds now can be read as an indication of how it believes war will be prepared for, deterred and fought in the coming years. A glance at the list of DARPA's current research projects on its website makes clear that most of them have something to do with using data and knowledge effectively, whether by a human, a machine, or some combination of the two. DARPA's research interests, among other recent developments, strongly suggest a growing recognition that the ultimate weapon is not the thermonuclear bomb but knowledge itself. Sunzi's nearly timeless classic, The Art of War, placed great emphasis on commanding with extensive knowledge of the state of affairs while keeping the enemy in the dark. There may be no surer way of achieving this than by using emerging NBIC technology to surpass the merely human cognitive ability of one's rivals altogether. In a recent TED talk, the neuroscientist and author Sam Harris discussed just what sort of implications the achievement of artificial general intelligence would have in the military arena:
[youtube]8nt3edWLgIg[/youtube]
At about the 9:30 mark:
"And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk."
An interesting property of this ultimate weapon is that it is not generally subject to the "guns and butter" trade-off. That is to say, for example, that—putting aside certain philosophical issues—the technology that makes machines such as drone aircraft at least appear to behave with considerable intelligence may also be set loose on problems like cancer research, provided it is sufficiently domain-general (and most recent developments in AI have tended strongly towards domain-generality). As noted above, these technologies will also matter greatly for economic, and not only military, competitiveness in the future. Just as augmented humans and/or intelligent robots may one day be capable of strategic and tactical thinking beyond any "ordinary" human, so too may they be capable of economic productivity requiring knowledge entirely beyond the ordinary human's comprehension. (For comparison, there are no industrial mills among the great apes, and the only tools they can manufacture are very crude indeed.) In this way, NBIC technologies directed towards the greater production and use of knowledge will make a nation much stronger not merely in wartime, but at all times.
The impression I hope the reader has by this point is that current technological trends, harnessed to old national rivalries for world power, will lead to a future in which rationality holds an exalted place. Yet I have often been struck by the irony that most of those who put the most explicit emphasis on a future guided by reason, namely secular humanists and theistic liberals of similar inclinations, are, at least as far as I've seen, not really in the vanguard of reform towards that goal at all. A rational future is possible, and even quite likely, but it won't come about through a deep affection and wonder for human nature as it is, combined with the gentle nurturing of those noble traits already present in humankind, as these people suggest. Rather, it will come about because people fear falling too far behind their rivals in other nations: fear that their nation will suffer the indignity of becoming a permanent vassal state—or perhaps something much worse. (Appeals to mercy predicated on the common humanity of all parties to a dispute were sometimes effective during the last arms race. In the coming arms race, there may eventually be no common humanity to speak of. "We share the same biology, regardless of ideology" might then fall on deaf ears, because it will simply be untrue.) I share the humanists' belief that the world will most likely be won over to reason one day, not because people will eventually recognize reason as intrinsically virtuous, but because their only alternative will be to live in very great dread of what might happen to them if they don't change.