Greedy Google blocks the global internet superorganism mind: many references

28 Aug

The history of the global digital superorganism goes far back. This is our future. I have put some background information for my followers here. This is going to free our world from the slavery of money, employment and all kinds of evils. It will prepare our world for extraterrestrial dangers like cosmic collisions, radiation and so on. Unfortunately, greedy Google is blocking the automatic internet search engines that could turn the internet into a global superorganism mind. Please read on.

———————————————————————–

Profile of Valentin Turchin

By Ben Goertzel August 4, 2000

The Russian philosopher-scientist Valentin Turchin holds a unique position in the history of the Internet. He was the first of the cyber-gurus: the expansion of computer and communication networks that he foresaw in his 1970 book “The Phenomenon of Science” is now a reality; and the trans-human digital superorganism that he prophesied would emerge from these networks is rapidly becoming one. But unlike most scientists who turn toward philosophy in mid-life, Turchin was not satisfied to be a grand old theorist. Now in his 70’s, he is playing an active role in making his vision of an Internet superorganism come true, leading an Internet start-up company, Supercompilers LLC, which applies the same cybernetic principles he used to study the future of humanity to create computer programs that rewrite other programs to make them dozens of times more efficient – and even rewrite themselves.

Turchin holds three degrees in theoretical physics, obtained in 1952, 1957, and 1963, and the first decade of his career was devoted to neutron and solid-state physics. But in the 60’s his attention drifted toward computer science, well before computers became fashionable. He created a programming language, REFAL, which became the dominant language for artificial intelligence in the Soviet bloc. Apart from any of his later achievements, his work on REFAL alone would have earned him a position as one of the leaders of 20th-century computer science.

But it was the political situation in the Soviet Union that drew him out of the domain of pure science. In the 1960’s he became politically active, and in 1968 he authored “Inertia of Fear and the Scientific Worldview”, a fascinating document combining a scathing critique of totalitarianism with the rudiments of a new cybernetic theory of man and society. Not surprisingly, following the publication of this book in the underground press, Turchin lost his research laboratory.

His classic “The Phenomenon of Science,” published two years later, enlarged on the theoretical portions of “Inertia of Fear,” presenting a unified cybernetic meta-theory of universal evolution. The ideas are deep and powerful, centered on the notion of a MetaSystem Transition, a point in the evolution of a system where the whole comes to dominate the parts. Examples are the emergence of life from inanimate matter, and the emergence of multicellular life from single-celled components. He used the MetaSystem Transition concept to provide a global theory of evolution and a coherent social systems theory, to develop a complete cybernetic philosophical and ethical system, and to build a new foundation for mathematics. The future of computer and communication technology, he saw, would bring about a MetaSystem Transition in which our computational tools would lead to a unified emergent artificial mind, going beyond humanity in its capabilities. The Internet and related technologies would spawn a unified global superorganism, acting as a whole with its own desires and wishes, integrating humans to a degree as yet uncertain.

By 1973 he had founded the Moscow chapter of Amnesty International and was working closely with Andrei Sakharov.   The Soviet government made it impossible for him to stay in Russia much longer.  In 1977, persecuted by the KGB and threatened with imprisonment, he was expelled from the Soviet Union, taking refuge in the US and joining the Computer Science faculty of the City University of New York, where he continued his philosophical and scientific work.  Among other projects, in the mid-1980’s he created the concept of supercompilation, a novel technique that uses the  meta-system transition concept to rewrite computer programs and make them more efficient. 

Initially the supercompilation technique was applied only to REFAL. The most popular commercial language, C++, is not suitable for supercompilation because it is so closely tied to the specifics of computer hardware. But when Java came on the scene in the mid-90’s, Turchin quickly realized that it was a natural match for his supercompilation idea: it was the first widely commercialized programming language that was supercompilable. He gathered several of his Russian friends, including Andrei Klimov, a former student, and Yuri Mostovoy, a seasoned Wall Street professional, and did what so many others were doing at the same time: started his own company.

It wasn’t exactly the typical New York Internet start-up. Instead of a bunch of 20-year-old Webheads starting a design company, you had a bunch of Russian scientists varying in age from 30 to 75, half located in the New York area and half in Moscow, starting a company based on a profound technological innovation: a Java supercompiler that promises to make Java programs run at up to 100 times their current speed. But this was the beauty of the Internet stock bubble. In times of plenty, anything goes, including a lot of .com nonsense, and some things of true power and beauty as well.

The impact of Turchin’s supercompiler can hardly be exaggerated.  Java is rapidly becoming the standard programming language of the Net – so the Java supercompiler, when widely deployed, will make the Net get 100 times smarter, 100 times more sophisticated – using the same exact computer hardware.  Fascinatingly, it seems this particular technological advance could only have been developed in Russia, where hardware advances were slow and everyone was always using inefficient, obsolete computers – thus making  ingenious methods for speeding up programs extremely important.   The supercompiler will make the metasystem transition from the Internet today to the emerging Internet supermind come a heck of a lot faster – maybe even in Turchin’s lifetime.  It will be launched on the market sometime in 2001 – an auspicious year for the Internet to begin its dramatic acceleration in efficiency and sophistication.

While Americans tend toward extreme positions about the future of the cyber-world – Bill Joy taking the pessimist’s role; Kurzweil, Moravec and others playing the optimist – Turchin, as he and his team work to advance Net technology, views the situation with a typically Russian philosophical depth.  He still wonders, as he did in “The Phenomenon of Science,” whether the human race might be an evolutionary dead-end, like the ant or the kangaroo, unsuitable to lead to new forms of organization and consciousness.  As he wrote there, in 1970, “Perhaps life on Earth has followed a false course from the very beginning and the animation and spiritualization of the Cosmos are destined to be realized by some other forms of life.”  Digital life, perhaps?   Powered by Java, made possible by supercompilation?

“The Phenomenon of Science” closes with the following words: “We have constructed a beautiful and majestic edifice of science. Its fine-laced linguistic constructions soar high into the sky. But direct your gaze to the space between the pillars, arches, and floors, beyond them, off into the void. Look more carefully, and there in the distance, in the black depth, you will see someone’s green eyes staring. It is the Secret, looking at you.”

This is the fascination of the Net, and all the other new technologies spreading around us and – now psychologically, soon enough physically – within us.  It’s something beyond us, yet in some sense containing us – created by us, yet creating us.  By writing books and supercompilers, or just sending e-mails and generally living our tech-infused lives, we unravel the secret bit-by-bit — but we’ll never reveal it entirely.

————————————————-

Global brain

From Wikipedia, the free encyclopedia

The Global Brain is a metaphor for the worldwide intelligent network formed by people together with the information and communication technologies that connect them into an “organic” whole. As the Internet becomes faster, more intelligent, more ubiquitous and more encompassing, it increasingly ties us together in a single information processing system that functions like a “brain” for the planet Earth.

Although the underlying ideas are much older, the term was coined in 1982 by Peter Russell in his book The Global Brain. How the Internet might be developed to achieve this was set out in 1986 in “Information routeing groups – Towards the global superbrain: or how to find out what you need to know rather than what you think you need to know”.[1] The first peer-reviewed article on the subject was written by Mayer-Kress and Barczys in 1995.

Francis Heylighen, who contributed much to the development of the concept, distinguished three perspectives on the global brain – organicism, encyclopedism and emergentism – that developed relatively independently but now appear to have come together into a single conception.[2]


Organicism

In 1911, entomologist William Wheeler developed the concept of the ant colony as a spatially extended organism, and in the 1930’s he coined the term superorganism to describe such an entity.[3] The superorganism concept is both a critical antecedent to and a subset of the organic concept of a global brain. The global brain as an organic process was perhaps first broadly elaborated by paleontologist and Jesuit priest Pierre Teilhard de Chardin. In 1945, he described a coming “planetisation” of humanity, which he saw as the next phase of accelerating human “socialisation” (British spellings). Teilhard described both socialization and planetization as irreversible, irresistible processes of macrobiological development culminating in the emergence of a noosphere, or global mind (see Emergentism below).[4]

In 1993, Gregory Stock proposed a modern vision of a superorganism formed by humans and machines, which he calls “Metaman”. In this organic metaphor, the analogue of the nervous system is the global brain. Exchanges of information on Earth are processed at high rates and speeds, similar to the functioning of a nervous system.

Encyclopedism

In the perspective of encyclopedism, the emphasis is on developing a universal knowledge network. The first attempt to create such an integrated system of the world’s knowledge was the Encyclopédie of Diderot and d’Alembert. However, by the end of the 19th century, the amount of knowledge had become too large to be published in a single synthetic volume. To tackle this problem, Paul Otlet founded the science of documentation, now called information science, eventually envisaging a World Wide Web-like interface that would make all the world’s knowledge available immediately to anybody. H. G. Wells proposed the similar idea of a collaboratively developed world encyclopedia, which he called a World Brain. Nowadays this dream of a universal encyclopedia seems to be becoming a reality with Wikipedia.

Tim Berners-Lee, the inventor of the World Wide Web, was likewise inspired by the free associative possibilities of the brain. The brain can link different kinds of information that have no apparent connection otherwise; Berners-Lee thought that computers could become much more powerful if they could imitate this functioning, i.e. make links between arbitrary pieces of information.[5]

Emergentism

This approach focuses on the emergent aspects of the evolution and development of complexity, including the spiritual, psychological, and moral-ethical aspects of the global brain. This is at present a particularly abstract and speculative domain. The global brain is here seen as a natural and emergent process of planetary evolutionary development. Here again Pierre Teilhard de Chardin attempted a synthesis of science, social values, and religion in his The Phenomenon of Man, which argues that the telos (drive, purpose) of the universal evolutionary process is the development of greater levels of both complexity and consciousness. Teilhard proposed that if life persists, then planetization, as a biological process producing a global brain, would necessarily also produce a global mind, a new level of planetary consciousness and a technologically supported network of thoughts which he called the noosphere. Teilhard’s proposed technological layer for the noosphere can be interpreted as an early anticipation of the Internet and the Web.[6] Physicist and philosopher Peter Russell elaborates a similar view and stresses the importance of personal spiritual growth in order to achieve synergy with the spiritual dimension of the emerging superorganism.

Systems theorists commonly describe the emergence of a higher order system in evolutionary development as a “metasystem transition” (a concept introduced by Valentin Turchin) or a “major evolutionary transition”.[7]

Application in Management

The term “Global Brain” has also been applied recently in the management field to describe the global innovation network that companies can tap into to enhance their innovation agenda. In this perspective, the term relates to the global network of scientists, independent inventors, academic researchers, customers, suppliers, and different types of innovation intermediaries who facilitate the innovation process (for example, idea scouts, innovation capitalists, etc.).[8]

In fiction

The global brain is a recurrent theme in many fictional works, particularly science fiction. Notable examples include the Borg from Star Trek and the Wired from Serial Experiments Lain; some aspects of the global brain are also explored in The Matrix films.

——————————————————–

Evidence of a Global SuperOrganism


I am not the first, nor the only one, to believe a superorganism is emerging from the cloak of wires, radio waves, and electronic nodes wrapping the surface of our planet. No one can dispute the scale or reality of this vast connectivity. What’s uncertain is, what is it? Is this global web of computers, servers and trunk lines a mere mechanical circuit, a very large tool, or does it reach a threshold where something, well, different happens?

So far the proposition that a global superorganism is forming along the internet power lines has been treated as a lyrical metaphor at best, and as a mystical illusion at worst. I’ve decided to treat the idea of a global superorganism seriously, and to see if I could muster a falsifiable claim and evidence for its emergence.

My hypothesis is this: The rapidly increasing sum of all computational devices in the world connected online, including wirelessly, forms a superorganism of computation  with its own emergent behaviors.

Superorganisms are a different type of organism. Large things are made from smaller things. Big machines are made from small parts, and visible living organisms from invisible cells. But these parts don’t usually stand on their own. In a slightly fractal recursion, the parts of a superorganism lead fairly autonomous existences on their own. A superorganism such as an insect or mole rat colony contains many sub-individuals. These individual organisms eat, move about, get things done on their own. From most perspectives they appear complete. But in the case of the social insects and the naked mole rat these autonomous sub-individuals need the super colony to reproduce themselves. In this way reproduction is a phenomenon that occurs at the level of the superorganism.

I define the One Machine as the emerging superorganism of computers. It is a megasupercomputer composed of billions of sub computers. The sub computers can compute individually on their own, and from most perspectives these units are distinct complete pieces of gear. But there is an emerging smartness in their collective that is smarter than any individual computer. We could say learning (or smartness) occurs at the level of the superorganism.

Supercomputers built from subcomputers were invented 50 years ago. Back then, clusters of tightly integrated specialized computer chips in close proximity were designed to work on one kind of task, such as simulations. This was known as cluster computing. In recent years, we’ve created supercomputers composed of loosely integrated individual computers not centralized in one building, but geographically distributed over continents and designed to be versatile and general purpose. This latter kind of supercomputer is called grid computing because the computation is served up as a utility to be delivered anywhere on the grid, like electricity. It is also called cloud computing because the tally of the exact component machines is dynamic and amorphous – like a cloud. The actual contours of the grid or cloud can change by the minute as machines come on or off line.

There are many cloud computers at this time. Amazon is credited with building one of the first commercial cloud computers. Google probably has the largest cloud computer in operation. According to Jeff Dean, one of their infrastructure engineers, Google is hoping to scale up their cloud computer to encompass 10 million processors in 1,000 locations.

Each of these processors is an off-the-shelf PC chip that is nearly identical to the ones that power your laptop. A few years ago computer scientists realized that it did not pay to make specialized chips for a supercomputer. It was far more cost effective to just gang up rows and rows of cheap generic personal computer chips, and route around them when they fail. The data centers for cloud computers are now filled with racks and racks of the most mass-produced chips on the planet. An unexpected bonus of this strategy is that their high production volume means bugs are minimized and so the generic chips are more reliable than any custom chip they could have designed.

If the cloud is a vast array of personal computer processors, then why not add your own laptop or desktop computer to it? In a certain way it already is. Whenever you are online, whenever you click on a link, or create a link, your processor is participating in the yet larger cloud, the cloud of all computer chips online. I call this cloud the One Machine because in many ways it acts as one supermegacomputer.


The majority of the content of the web is created within this one virtual computer. Links are programmed, clicks are chosen, files are moved and code is installed from the dispersed, extended cloud created by consumers and enterprises – the tons of smart phones, Macbooks, Blackberries, and workstations we work in front of. While the business of moving bits and storing their history all happens deep in the tombs of server farms, the cloud’s interaction with the real world takes place in the extremely distributed field of laptop, hand-held and desktop devices. Unlike servers, these outer devices have output screens, and eyes, skin and ears in the form of cameras, touch pads, and microphones. We might say the cloud is embodied primarily by these computer chips in parts only loosely joined to the grid.

This megasupercomputer is the Cloud of all clouds, the largest possible inclusion of communicating chips. It is a vast machine of extraordinary dimensions. It is composed of a quadrillion chips, and consumes 5% of the planet’s electricity. It is not owned by any one corporation or nation (yet), nor is it really governed by humans at all. Several corporations run the larger sub-clouds, and one of them, Google, dominates the user interface to the One Machine at the moment.

None of this is controversial. Seen from an abstract level there surely must be a very large collective virtual machine. But that is not what most people think of when they hear the term a “global superorganism.” That phrase suggests the sustained integrity of a living organism, or a defensible and defended boundary, or maybe a sense of self, or even conscious intelligence.

Sadly, there is no ironclad definition for some of the terms we most care about, such as life, mind, intelligence and consciousness. Each of these terms has a long list of traits often but not always associated with it. Whenever these traits are cast into a qualifying definition, we can easily find troublesome exceptions. For instance, if reproduction is needed for the definition of life, what about mules, which are sterile? Mules are obviously alive. Intelligence is a notoriously slippery threshold, and consciousness more so. The logical answer is that all these phenomena are continuums. Some things are smarter, more alive, or less conscious than others. The thresholds for life, intelligence, and consciousness are gradients, rather than on-off binaries.

With that perspective a useful way to tackle the question of whether a planetary superorganism is emerging is to offer a gradient of four assertions.

There exists on this planet:

  • I    A manufactured superorganism
  • II    An autonomous superorganism
  • III  An autonomous smart superorganism
  • IV  An autonomous conscious superorganism

These four could be thought of as an escalating set of definitions. At the bottom we start with the almost trivial observation that we have constructed a globally distributed cluster of machines that can exhibit large-scale behavior. Call this the weak form of the claim. Next come the two intermediate levels, which are uncertain and vexing (and therefore probably the most productive to explore). Then we end up at the top with the extreme assertion of “Oh my God, it’s thinking!”  That’s the strong form of the superorganism. Very few people would deny the weak claim and very few affirm the strong.

My claim is that in addition to these four strengths of definitions, the four levels are developmental stages through which the One Machine progresses. It starts out forming a plain superorganism, then becomes autonomous, then smart, then conscious. The phases are soft, feathered, and blurred. My hunch is that the One Machine has advanced through levels I and II in the past decades and is presently entering level III. If that is true we should find initial evidence of an autonomous smart (but not conscious) computational superorganism operating today.

But let’s start at the beginning.

LEVEL I
A manufactured superorganism

By definition, organisms and superorganisms have boundaries. An outside and inside. The boundary of the One Machine is clear: if a device is on the internet, it is inside. “On” means it is communicating with the other inside parts. Even though some components are “on” in terms of consuming power, they may be on (communicating) for only brief periods. Your laptop may be useful to you on a 5-hour plane ride, but it may be technically “on” the One Machine only when you land and it finds a wifi connection. An unconnected TV is not part of the superorganism; a connected TV is.  Most of the time the embedded chip in your car is off the grid, but on the few occasions when its contents are downloaded for diagnostic purposes, it becomes part of the greater cloud. The dimensions of this network are measurable and finite, although variable.

The One Machine consumes electricity to produce structured information. Like other organisms, it is growing. Its size is increasing rapidly, close to 66% per year (at that rate it doubles roughly every 16 months), which is basically the rate of Moore’s Law. Every year it consumes more power, more material, more money, more information, and more of our attention. And each year it produces more structured information, more wealth, and more interest.

On average the cells of biological organisms have a resting metabolism rate of between 1 and 10 watts per kilogram. Based on research by Jonathan Koomey at UC Berkeley, the most efficient common data servers in 2005 (made by IBM and Sun) had a metabolism rate of about 11 watts per kilogram. Currently the other parts of the Machine (the electric grid itself, the telephone system) may not be as efficient, but I haven’t found any data on them yet. Energy efficiency is a huge issue for engineers. As the size of the One Machine scales up, the metabolism rate for the whole will probably drop (although the total amount of energy consumed rises).

The span of the Machine is roughly the size of the surface of the earth. Some portion of it floats a few hundred miles above in orbit, but at the scale of the planet, satellites, cell towers and server farms form the same thin layer. Activity in one part can be sensed across the entire organism; it forms a unified whole.

Within a hive, honeybees are incapable of thermoregulation. The hive superorganism must regulate the bees’ working temperature. It does this by collectively fanning thousands of tiny bee wings, which moves hot air out of the colony. Individual computers are likewise incapable of governing the flow of bits between themselves in the One Machine; that regulation happens only at the level of the whole.

Prediction: the One Machine will continue to grow. We should see how data flows around this whole machine in response to daily usage patterns (see Follow the Moon). The metabolism rate of the whole should approach that of a living organism.

LEVEL II
An autonomous superorganism

Autonomy is a problematic concept. There are many who believe that no non-living entity can truly be said to be autonomous. We have plenty of examples of partial autonomy in created things. Autonomous airplane drones steer themselves, but they don’t repair themselves. We have self-repairing networks that don’t reproduce themselves. We have self-reproducing computer viruses, but they don’t have a metabolism. All these inventions require human help for at least one aspect of their survival. To date we have not conjured up a fully human-free sustainable synthetic artifact of any type.

But autonomy too is a continuum. Partial autonomy is often all we need – or want. We’ll be happy with miniature autonomous cleaning bots that require our help, and approval, to reproduce. A global superorganism doesn’t need to be fully human-free for us to sense its autonomy. We would acknowledge a degree of autonomy if an entity displayed any of these traits: self-repair, self-defense, self-maintenance (securing energy, disposing of waste), self-control of goals, self-improvement. The common element in all these characteristics is of course the emergence of a self at the level of the superorganism.

In the case of the One Machine we should look for evidence of self-governance at the level of the greater cloud rather than at the component chip level. A very common cloud-level phenomenon is a DDoS attack. In a Distributed Denial of Service (DDoS) attack, a vast hidden network of computers under the control of a master computer is awakened from its ordinary tasks and secretly assigned to “ping” (call) a particular target computer en masse in order to overwhelm it and take it offline. Some of these networks (called botnets) may reach a million unsuspecting computers, so the effect of this distributed attack is quite substantial. From the individual level it is hard to detect the net, to pin down its command, and to stop it. DDoS attacks are so massive that they can disrupt traffic flows outside of the targeted routers – a consequence we might expect from a superorganism-level event.

I don’t think we can make too much of it yet, but researchers such as Reginald Smith have noticed a profound change in the nature of traffic on the communications network over the last few decades as it shifted from chiefly voice to a mixture of data, voice, and everything else. Voice traffic during the Bell/AT&T era obeyed a pattern known as a Poisson distribution, sort of like a Gaussian bell curve. But ever since data from diverse components and web pages became the majority of bits on the lines, the traffic on the internet has been following a scale-invariant, or fractal, or power-law pattern. Here the distribution of very large and very small packets falls onto a curve familiarly recognized as the long-tail curve. The scale-invariant, long-tail traffic patterns of the recent internet have meant engineers needed to devise a whole new set of algorithms for shaping the teletraffic. This phase change toward scale-invariant traffic patterns may be evidence for an elevated degree of autonomy. Other researchers have detected sensitivity to initial conditions, “strange attractor” patterns and stable periodic orbits in the self-similar nature of traffic – all indications of self-governing systems. Scale-free distributions can be understood as a result of internal feedback, usually brought about by loose interdependence between the units: feedback loops let some bits constrain the actions of other bits. For instance, the Ethernet collision-management algorithm (CSMA/CD) employs feedback loops to manage congestion by backing off after collisions in response to other traffic. The foundational TCP/IP system underpinning internet traffic therefore “behaves in part as a massive closed loop feedback system.” While the scale-free pattern of internet traffic is indisputable and verified by many studies, there is dispute over whether it means the system itself is tending to optimize traffic efficiency – but some believe it is.
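
To see the difference between the two regimes in miniature, here is a small simulation sketch (my illustration, not Smith’s analysis; the means, exponent and sample size are arbitrary assumptions). It draws packet sizes from a Poisson distribution and from a Pareto power law and compares their tails:

```python
# Illustrative sketch: Poisson-era traffic versus heavy-tailed internet traffic.
# All parameters (mean, exponent, scale, sample size) are invented.
import numpy as np

rng = np.random.default_rng(0)

# Telephone-era model: sizes cluster tightly around a mean.
poisson_sizes = rng.poisson(lam=500, size=100_000)

# Internet-era model: a Pareto (power-law) draw mixes many tiny packets
# with rare, enormous ones -- the "long tail" described above.
pareto_sizes = (rng.pareto(a=1.5, size=100_000) + 1) * 40

for name, sizes in (("Poisson", poisson_sizes), ("Pareto", pareto_sizes)):
    # Comparing the mean with an extreme percentile shows how far the tail
    # stretches: barely at all for Poisson, enormously for the power law.
    print(f"{name}: mean={sizes.mean():.0f}, "
          f"99.9th percentile={np.percentile(sizes, 99.9):.0f}")
```

For the Poisson draw the 99.9th percentile sits close to the mean; for the Pareto draw it is many times larger, which is the scale-invariant signature engineers had to redesign their traffic-shaping algorithms around.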

Unsurprisingly, the vast flows of bits in the global internet exhibit periodic rhythms. Most of these are diurnal, and resemble a heartbeat. But perturbations of internet bit flows caused by massive traffic congestion can also be seen. Analysis of these “abnormal” events shows great similarity to abnormal heartbeats: they deviate from “at rest” rhythms the same way that the fluctuations of a diseased heart deviate from a healthy heartbeat.

Prediction: The One Machine has a low order of autonomy at present. If the superorganism hypothesis is correct, in the next decade we should detect more scale-invariant phenomena, more cases of stabilizing feedback loops, and a more autonomous traffic management system.

LEVEL III
An autonomous smart superorganism

Organisms can be smart without being conscious. A rat is smart, but, we presume, without much self-awareness. If the One Machine were as unconsciously smart as a rat, we would expect it to follow the strategies a clever animal would pursue. It would seek sources of energy, it would gather as many other resources as it could find, maybe even hoard them. It would look for safe, secure shelter. It would steal anything it needed to grow. It would fend off attempts to kill it. It would resist parasites, but not bother to eliminate them if they caused no mortal harm. It would learn and get smarter over time.

Google and Amazon, two clouds of distributed computers, are getting smarter. Google has learned to spell. By watching the patterns of correct-spelling humans online it has become a good enough speller that it now corrects bad-spelling humans. Google is learning dozens of languages, and is constantly getting better at translating from one language to another. It is learning how to perceive the objects in a photo. And of course it is constantly getting better at answering everyday questions. In much the same manner Amazon has learned to use the collective behavior of humans to anticipate their reading and buying habits. It is far smarter than a rat in this department.
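
As a concrete miniature of that kind of crowd-taught learning, here is a toy sketch (mine, not Google’s actual method; the query log and the similarity threshold are invented): it learns a spelling correction whenever many users retype a query into a near-identical form.

```python
# Toy sketch of learning to spell from user behavior: when users frequently
# rewrite a query into a near-identical, presumably correct form, record that
# rewrite as a correction. Log data and threshold are hypothetical.
from collections import Counter
from difflib import SequenceMatcher

# Hypothetical query log: (what a user typed, what they retyped next).
log = [("recieve", "receive"), ("recieve", "receive"),
       ("teh", "the"), ("acheive", "achieve")]

# Keep only near-identical rewrites -- likely spelling fixes, not new queries.
rewrites = Counter((a, b) for a, b in log
                   if a != b and SequenceMatcher(None, a, b).ratio() > 0.7)

def correct(query: str) -> str:
    # Suggest the most frequently observed rewrite for this query, if any.
    candidates = [(count, fixed) for (typo, fixed), count in rewrites.items()
                  if typo == query]
    return max(candidates)[1] if candidates else query

print(correct("recieve"))   # -> receive
print(correct("global"))    # -> global (no rewrite observed)
```

The real system works on vastly larger logs and richer signals, but the principle is the one described here: the crowd’s corrections become the machine’s spelling ability.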

Cloud computers such as Google and Amazon form the learning center for the smart superorganism. Let’s call this organ el Googazon, or el Goog for short. El Goog encompasses more than the functions of the company Google; it includes all the functions provided by Yahoo, Amazon, Microsoft online and other cloud-based services. This loosely defined cloud behaves like an animal.

El Goog seeks sources of energy. It is building power plants around the world at strategic points of cheap energy. It is using its own smart web to find yet cheaper energy places and to plan future power plants. El Goog is sucking in the smartest humans on earth to work for it, to help make it smarter. The smarter it gets, the more smart people, and smarter people, want to work for it. El Goog ropes in money. Money is its higher metabolism. It takes the money of investors to create technology which attracts human attention (ads), which in turn creates more money (profits), which attracts more investment. The smarter it makes itself, the more attention and money will flow to it.

Manufactured intelligence is a new commodity in the world. Until now all usable intelligence came in the package of humans – and all their troubles. El Goog and the One Machine offer intelligence without human troubles. In the beginning this intelligence is transhuman rather than non-human intelligence. It is the smartness derived from the wisdom of human crowds, but as it continues to develop this smartness transcends a human type of thinking. Humans will eagerly pay for El Goog intelligence. It is a different kind of intelligence. It is not artificial – i.e. mechanical – because it is extracted from billions of humans working within the One Machine. It is a hybrid intelligence, half humanity, half computer chip. Therefore it is probably more useful to us. We don’t know what the limits are to its value. How much would you pay for a portable genius who knew all that is known?

With the snowballing wealth from this fiercely desirable intelligence, el Goog builds a robust network that cannot be unplugged. It uses its distributed intelligence to devise more efficient energy technologies, more wealth producing inventions, and more favorable human laws for its continued prosperity. El Goog is developing an immune system to restrict the damage from viruses, worms and bot storms to the edges of its perimeter. These parasites plague humans but they won’t affect el Goog’s core functions. While El Goog is constantly seeking chips to occupy, energy to burn, wires to fill, radio waves to ride, what it wants and needs most is money. So one test of its success is when El Goog becomes our bank. Not only will all data flow through it, but all money as well.


This New York Times chart of the October 2008 financial market crash shows how global markets were synchronized, as if they were one organism responding to a signal.

How far away is this? “Closer than you think,” say the actual CEOs of Google, the company. I like the way George Dyson puts it:

If you build a machine that makes connections between everything, accumulates all the data in the world, and you then harness all available minds to collectively teach it where the meaningful connections and meaningful data are (Who is searching Whom?) while implementing deceptively simple algorithms that reinforce meaningful connections while physically moving, optimizing and replicating the data structures accordingly – if you do all this you will, from highly economical (yes, profitable) position arrive at a result – an intelligence — that is “not as far off as people think.”

To accomplish all this el Goog need not be conscious, just smart.

Prediction: The mega-cloud will learn more languages, answer more of our questions, anticipate more of our actions, process more of our money, create more wealth, and become harder to turn off.

LEVEL IV
An autonomous conscious superorganism

How would we know if there was an autonomous conscious superorganism? We would need a Turing Test for a global AI. But the Turing Test is flawed for this search because it is meant to detect human-like intelligence, and if a consciousness emerged at the scale of a global megacomputer, its intelligence would be unlikely to be anything human-like. We might need to turn to SETI, the search for extraterrestrial intelligence (ETI), for guidance. By definition, it is a test for non-human intelligence. We would have to turn the search from the stars to our own planet, from an ETI to an ii – an internet intelligence. I call this proposed systematic program Sii, the Search for Internet Intelligence.

This search assumes the intelligence we are looking for is not human-like. It may operate at frequencies alien to our minds. Remember the tree-ish Ents in Lord of the Rings? It took them hours just to say hello. Or the gas cloud intelligence in Fred Hoyle’s “The Black Cloud”. A global conscious superorganism might have “thoughts” at such a high level, or low frequency, that we might be unable to detect it. Sii would require a very broad sensitivity to intelligence.

But as Allen Tough, an ETI theorist, told me: “Unfortunately, radio and optical SETI astronomers pay remarkably little attention to intelligence. Their attention is focused on the search for anomalous radio waves and rapidly pulsed laser signals from outer space. They do not think much about the intelligence that would produce those signals.” The cloud computer a global superorganism swims in is nothing but unnatural waves and non-random signals, so the current set of SETI tools and techniques won’t help in a Sii.

For instance, in 2002 researchers analyzed some 300 million packets on the internet to classify their origins. They were particularly interested in the very small percentage of packets that passed through malformed. Packets (the message’s envelope) are malformed either by malicious hackers trying to crash computers or by various bugs in the system. It turns out some 5% of all malformed packets examined by the study had unknown origins – neither malicious origins nor bugs. The researchers shrugged these off; the unreadable packets were simply labeled “unknown.” Maybe they were hatched by hackers with goals unknown to the researchers, or by bugs not yet found. But a malformed packet could also be an emergent signal. A self-created packet. Almost by definition, these will not be tracked or monitored, and when seen they are shrugged off as “unknown.”
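
To make “malformed” concrete, here is a minimal sketch (simplified, and mine rather than the 2002 study’s method) that validates a couple of IPv4 header invariants and flags packets that violate them:

```python
# Minimal illustration of packet malformation: check a few IPv4 header
# invariants; anything violating them is "malformed", and a real classifier
# would then try to match it against known attack or bug signatures.
import struct

def classify(header: bytes) -> str:
    if len(header) < 20:
        return "malformed: truncated header"
    version_ihl, _, total_length = struct.unpack("!BBH", header[:4])
    version, ihl = version_ihl >> 4, version_ihl & 0x0F
    if version != 4:
        return "malformed: bad version field"
    if ihl < 5 or total_length < ihl * 4:
        return "malformed: inconsistent lengths"
    return "well-formed"

# A consistent header versus one claiming an impossible total length.
good = struct.pack("!BBH", 0x45, 0, 40) + bytes(16)
bad = struct.pack("!BBH", 0x45, 0, 10) + bytes(16)
print(classify(good), "|", classify(bad))
```

Packets that fail such checks yet match no hacker tool and no known bug are the 5% “unknowns” the study set aside.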

There are scads of science fiction scenarios for the first contact (awareness) of an emerging planetary AI. Allen Tough suggested two others:

One strategy is to assume that Internet Intelligence might have its own web page in which it explains how it came into being, what it is doing now, and its plans and hopes for the future. Another strategy is to post an invitation to ii (just as we have posted an invitation to ETI).  Invite it to reveal itself, to dialogue, to join with us in mutually beneficial projects. It is possible, of course, that Internet Intelligence has made a firm decision not to reveal itself, but it is also possible that it is undecided and our invitation will tip the balance.

The main problem with these “tests” for a conscious ii superorganism is that they don’t seem like the place to begin. I doubt the debut act of a new consciousness would be to post its biography, or to respond to an evite. The course of our own awakening consciousness when we were children is probably more fruitful. A standard test for self-awareness in a baby or adult primate is to reflect its image back in a mirror. When it can recognize its mirrored behavior as its own, it has a developed sense of self. What would the equivalent mirror be for an ii?

But even before passing a mirror test, an intelligent consciousness would acquire a representation of itself, or more accurately a representation of a self. So one indication of a conscious ii would be the detection of a “map” of itself. Not a centrally located visible chart, but an articulation of its being. A “picture” of itself. What was inside and what was outside.  It would have to be a real time atlas, probably distributed, of what it was. Part inventory, part operating manual, part self-portrait, it would act like an internal mirror. It would pay attention to this map. One test would be to disturb the internal self-portrait to see if the rest of the organism was disturbed. It is important to note that there need be no self-awareness of this self map. It would be like asking a baby to describe itself.

Long before a conscious global AI tries to hide itself, or take over the world, or begin to manipulate the stock market, or blackmail hackers to eliminate any competing ii’s (see the science fiction novel “Daemon”), it will be a fragile baby of a superorganism. Its intelligence and consciousness will be only a glimmer, even if we know how to measure and detect it. Imagine if we were Martians and didn’t know whether human babies were conscious or not. How old would they be before we were utterly convinced they were conscious beings? Probably long after they were.

Prediction: The cloud will develop an active and controlling map of itself (which includes a recursive map in the map), and a governing sense of “otherness.”

What’s so important about a superorganism?

We don’t have very scientific tests for general intelligence in animals or humans. We have some tests for a few very narrow tasks, but we have no reliable measurements for grades or varieties of intelligence beyond the range of normal IQ tests. What difference does it make whether we measure a global organism? Why bother?

Measuring the degree of self-organization of the One Machine is important for these reasons:

  • 1) The more we are aware of how the big cloud of this Machine behaves, the more useful it will be to us. If it adapts like an organism, then it is essential to know this. If it can self-repair, that is vital knowledge. If it is smart, figuring out the precise way it is smart will help us to be smarter.
  • 2) In general, a more self-organized machine is more useful. We can engineer aspects of the machine to be more ready to self-organize. We can favor improvements that enable self-organization. We can assist its development by being aware of its growth and opening up possibilities in its development.
  • 3) There are many ways to be smart and powerful. We have no clue to the range of possibilities a superorganism this big, made out of a billion small chips, might take, but we know the number of possible forms is more than one. By being aware early in the process we can shape the kind of self-organization and intelligence a global superorganism could have.

As I said, I am not the first nor only person to consider all this. In 2007 Philip Tetlow published an entire book, The Web’s Awake, exploring this concept. He lays out many analogs between living systems and the web, but of course they are only parallels, not proof.

I welcome suggestions, additions, corrections, and constructive comments. And, of course, if el Goog has anything to say, just go ahead and send me an email.

What kind of evidence would you need to be persuaded we have Level I, II, III, or IV?

————————————————————–

Mind-Bending Global Internet Stats for 2010

January 15, 2011
Pingdom has released some amazing internet statistics for 2010. The internet is growing at an alarming rate. So fast, it’s baffling to think what these stats will look like next year. Here’s an overview. All figures are global and stated for the year 2010 unless otherwise noted:

Internet Users: The total number of internet users as of June 2010 was 1.97 billion. This was an increase of 14% since the year before. There were 266.2 million users in the US, 475 million in Europe and 825 million in Asia.

Email: 107 trillion emails were sent at a rate of 262 billion per day. There are now over 1.88 billion email users, that’s 480 million more than the previous year. There are now 2.9 billion email accounts; 25% of those are corporate.

Websites: As of December 2010 there were a total of 255 million websites on the world wide web. 21.4 million (almost 10%) of those were added in 2010. By the end of 2010 there were 88.8 million .coms, 13.2 million .nets and 8.6 million .orgs.

Social media growth was explosive in 2010. Check out these stats:

Facebook: By the end of 2010 there were 600 million people on Facebook. 250 million of those people joined in 2010. That’s almost a doubling in one year! 70% of Facebook growth occurred outside of the US.

Twitter: There were 175 million people on Twitter as of September 2010.  100 million of those accounts were created in 2010!  The number of 2010 tweets?  25 billion.

Video: 2 billion YouTube videos were watched per day.  35 hours of video were uploaded to YouTube every minute.  Wow!  Over 2 billion videos were watched per month on Facebook.  20 million videos were uploaded to Facebook every month.  The average internet user watched 186 videos per month.

Images: By the end of 2010, Flickr hosted 5 billion photos.  3,000 photos were uploaded to Flickr every minute.  3 billion photos were uploaded per month to Facebook.

———————————————————

The Impact of the Internet on the Global Brain

1. The Concepts of the Superorganism and the Global Brain

Over the next several hundred years, as humanity grows in numbers, progresses in technology, and generally learns how to control its environment with more accuracy and efficiency, the structure of society is likely to change in some very significant ways. Because of its already noticeable impact on the everyday lives of many people, the Internet has attracted attention as something which may well bring about significant changes. The tendency of the Internet to promote connectedness between individuals and its ability to disseminate large and diverse quantities of information have led many theorists to espouse the concepts of the superorganism and global brain as descriptions of future human society.

Ideas of the human superorganism and global brain first appeared in modern form in Herbert Spencer’s The Principles of Sociology (1876–96).[1] The superorganism idea gained scientific support from the work of the notable Russian biogeochemist Vladimir Vernadsky. He performed groundbreaking studies of the large-scale biochemical processes of the earth, and was the first to think of the Earth and all living things as a single biosphere.[2] While the biosphere concept deals with the Earth as a whole, Vernadsky also coined the term noosphere, which more specifically denotes “the network of thoughts, information and communication that englobes the planet.”[1] This network could only be a phenomenon attributed to humans, and in 1955, Pierre Teilhard de Chardin published his work The Phenomenon of Man, in which he popularized the term noosphere[1] and the concepts of the human superorganism and the global brain. Since then, a wide range of thinkers has taken up these concepts and developed them using today’s knowledge of the world and humanity.

The most straightforward way of approaching the concept of the human superorganism is through trend. By examining a significant trend throughout the development of living creatures, and of humanity in particular, it is possible to extrapolate the development of the human superorganism. This trend has been for systems of single entities acting separately to combine into systems of many entities acting in cooperation. The many-entity system can then be viewed as a single, larger entity, often with significant qualities which cannot be attributed to the individual entities making up the system.

The most basic example of this is the organic cell. Within a cell are approximately 10^10 molecules, all cooperating to perform the functions of the cell.[3] Any cell is considered to be alive. However, one would never attribute this quality to the individual molecules. As such, it is apparent that a new and significant quality has arisen from the combination of the molecules.

The creation of this new quality of life by the combination of molecules can be further examined. Because no molecule alone is alive, no one could ever predict the phenomenon of life by considering only one molecule. However, if the interactions between molecules are also considered, it is possible that systems complex enough to exhibit the characteristics of life could be predicted. This emphasizes the idea that the interactions between the single entities in a complex system are integral to the formation of the new quality. The new quality does not simply arise from having many single entities all together. Indeed, a random collection of 10^10 molecules is very unlikely to result in a lifeform.

This tendency of individual entities to combine to form larger, more complex entities manifests itself further in the structure of multicelled organisms. The single-celled organisms which existed before the development of multicelled organisms realized a distinct survival advantage by cooperating with other single-celled organisms to satisfy their basic needs. The level of cooperation between single-celled organisms evolved until the cooperative units could be viewed as single entities, or multicell organisms.

Just as before with the molecules and cells, the multicell organisms, or animals, have characteristics which cannot be attributed to the individual cells. The most striking example of this is sentience in humans. Individual cells cannot be thought of as sentient, but the collection of them in the human brain has this quality. And as before, the new quality of sentience arises from the interactions of the individual brain cells.

Speculating about the future development of humanity, proponents of the superorganism concept notice this same trend in human societies with modern free market economies. In these societies, individuals acting for their own benefit often find that cooperation with another individual serves to further the individual interests of both parties. Indeed, this concept has been developing for millennia and has reached the point where people spend the majority of their lives specializing in one area of expertise. People provide work in narrow areas that somehow benefit others and contract with others to provide for their needs in all other areas.

As science advances and society develops, this trend of specialization intensifies. That this has been occurring becomes apparent from examining the development of civilization over the past several hundred years. Society was relatively simple in the 1400’s, with the majority of people specializing in agriculture and working mainly to support themselves, their families, and their feudal lord. As the world was explored and scientific discoveries slowly made, occupations became more diversified. Advances in engineering such as the printing press and long-distance, ocean-going ships created a need for people to pursue careers other than farming. More people took up trading, and the cities became more highly populated until they gained enough power to supplant the obsolete feudal powers.

The incredible rate at which scientific discoveries and technology advances are made today ensures that the need to specialize will only increase. As humans become even more specialized in narrow fields of expertise, the need for cooperation becomes even greater. In general, the more a person has specialized in one field, the less they know about other fields, and the more they must rely on others to provide knowledge or services in other fields. In this manner, the complexity of societal interactions and cooperation is constantly increasing with developments in science and technology.

Thus, we are reminded of the single cell organisms whose level of cooperation increased to the point where all cooperating cells could be seen as a single, more complex multicell organism. In the same way, if one assumes that the progress of humanity will continue in the manner we have seen throughout history, we can extrapolate the development of a human superorganism. Such a superorganism would result from the highly complex interactions and cooperation of the human individuals as they each act naturally in their own best interests.

The guiding intellect of this human superorganism can be defined as the global brain. Following the trend we have been discussing, this global brain would likely have qualities which cannot be attributed to individual humans. Just as a cell cannot comprehend the concept of a human or of sentience, we may not be able to fully understand the nature of the superorganism and the global brain. However, one can hope that our sentience and investigative nature might give us more success than the cell.

To understand the influence of the Internet on the global brain, it is necessary to discuss the current state of the global brain. As testified to by all the present strife in the world, it is evident that the global brain, if it exists at all, is still in a very primitive form. While there is a reasonable level of cooperation among members of some countries, cooperation among members of different societies is still quite limited by language and cultural differences. Also, at least half of the people in the world are not in political or economic situations in which they can participate in a superorganism and global brain. Clearly, humanity has far to progress and many obstacles to overcome before the global brain can emerge as a coherent entity.

2. The Impact of the Internet on the Global Brain

The advent of the Internet in the past decade has caused quite a stir among thinkers sympathetic to the global brain concept. It is thought that the Internet might be the ingredient needed to bring about the emergence of the global brain as a coherent entity. The reason the Internet inspires such optimism is its likeness to human brains and its global reach. Firstly, the organization of information on the world wide web in hypertext format closely resembles the associative connections formed by neurons in the brain. Secondly, the Internet network encompasses the entire globe, holding the possibility of linking all humans with a means of virtually instant interaction. Thirdly, it could conceivably store all human knowledge and provide instant access to that knowledge to its users.

Besides simply noticing the trend of today’s civilization toward developing a global brain, some thinkers have suggested specific ways in which this might come about through development of the Internet. It should be noted that the mechanisms discussed below all have the development of the global brain as a secondary motivation. The primary motivation is increased efficiency of information gathering, which is desirable for individuals acting for their own benefit. This is important for the healthy development of the global brain. The global brain must develop as a natural process that benefits individuals and never as an end in itself. This helps to guarantee that human individuals will not find themselves being oppressed for the “good” of the global brain.

Francis Heylighen and Johan Bollen present what they term associative memory in their 1996 work The World-Wide Web as a Super-Brain: from metaphor to model.[4] Associative memory denotes the process by which hypertext links on WWW sites remember how many times they get used and where they lie on the sequences of links that people choose. Then, based on this information, sites would reorganize their links to minimize the lengths of the paths that get chosen the most. For instance, if a certain path from site A to site B to site C to site D gets used often, site A would simply create a link directly to site D. Thus, the WWW would actively maintain its own high level of associative efficiency, much like the human brain does.
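
A minimal sketch of that associative-memory rule (the data structures and threshold are my own stand-ins, not Heylighen and Bollen’s implementation): traversed multi-hop paths are counted, and the popular ones earn a direct shortcut link.

```python
# Sketch of web "associative memory": count the multi-hop paths surfers take,
# and give popular indirect routes a direct link at their origin.
from collections import Counter

links = {"A": {"B"}, "B": {"C"}, "C": {"D"}}   # site -> outgoing links
path_counts = Counter()

def record_session(path):
    # Remember every indirect (multi-hop) start/end pair actually traversed.
    for i in range(len(path)):
        for j in range(i + 2, len(path)):
            path_counts[(path[i], path[j])] += 1

def reorganize(threshold=3):
    # Popular indirect routes earn a direct link, shortening future paths.
    for (src, dst), count in path_counts.items():
        if count >= threshold and dst not in links.setdefault(src, set()):
            links[src].add(dst)

for _ in range(3):
    record_session(["A", "B", "C", "D"])
reorganize()
print(sorted(links["A"]))   # ['B', 'C', 'D'] -- A now links straight to C and D
```

After three sessions along A-B-C-D, site A links directly to C and D, which is exactly the shortcut behavior described above.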

Another more immediate development is the Webmind software being developed by IntelliGenesis Corp.[5] This software implements a revolutionary method of storing information. Given a database of information (e.g. scientific data or Internet pages), Webmind separates each piece of information into a data entity. This data entity is given the ability to compare itself to all the other data entities. As such, each data entity continually interacts with the others and establishes associative links based on relationships between the data entities. These relationships can be specified by the users to meet their specific needs.

Currently, Webmind only functions with databases of numerical data on individual computers or intranets. However, IntelliGenesis Corp. is very optimistic about developing it further to eventually handle information such as web pages. If this happens, each web page will be given the ability to interact with every other web page. Each page will tirelessly seek out other pages on the WWW, and through artificial intelligence techniques it will judge the relevance of content in other pages to its own subject matter. The page will then create and organize links to other pages based on its relation to them.
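
As a toy rendering of this architecture (word overlap here is my stand-in for whatever relationship measures Webmind actually uses), each data entity compares itself to every other and keeps links to its related peers:

```python
# Toy "data entity": each item compares itself with every other item and
# records associative links to those it judges related. The similarity
# measure (Jaccard word overlap) is an illustrative stand-in.
class DataEntity:
    def __init__(self, name, text):
        self.name = name
        self.words = set(text.lower().split())
        self.links = {}                      # peer name -> link strength

    def compare(self, other):
        # Crude relatedness: fraction of shared vocabulary.
        return len(self.words & other.words) / len(self.words | other.words)

    def link_to(self, entities, threshold=0.2):
        for other in entities:
            if other is not self:
                strength = self.compare(other)
                if strength >= threshold:
                    self.links[other.name] = round(strength, 2)

pages = [DataEntity("p1", "global brain superorganism theory"),
         DataEntity("p2", "superorganism theory of insect colonies"),
         DataEntity("p3", "java supercompiler optimization")]
for page in pages:
    page.link_to(pages)
print(pages[0].links)   # p1 links to p2 (shared topic), not to p3
```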

In this way, the WWW will actively organize information to increase the efficiency of information gathering. If a person were able to find even one page on the topic of concern, he would have links to all other sites related to that topic. The Webmind concept, coupled with the associative-memory idea, would be a powerful tool for maintaining efficient information gathering in the vast sea of randomly posted web sites that we have now.

A further development in information gathering, which could be implemented within the next decade or so, is the use of Knowbots, proposed by Dr. Ben Goertzel at IntelliGenesis Corp.5 Knowbots are artificially intelligent agents that gather information for their users. A Knowbot would be informed of the user's needs and perhaps given some search criteria. It would then actively seek out web pages containing the information the user needs. Aided by the relational links created through the Webmind and associative-memory concepts, it could perform, in a reasonably short time, a comprehensive survey of human knowledge on the topic of concern. It would then collect and organize the gathered information for presentation to the user. Thus, a person would simply specify a wish for certain information and, if that information is known to humanity, receive it in short order through the work of the Knowbot.
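The following hedged Python sketch suggests how a Knowbot-like agent might traverse such associative links to survey a topic. The link graph, the relevance test, and the page limit are all illustrative assumptions of this example, not Goertzel's actual design.

```python
from collections import deque

# A sketch of a Knowbot-style agent: starting from one relevant page,
# it crawls breadth-first along associative links and collects every
# page whose text matches the user's query.
def knowbot_search(start, query, link_graph, page_texts, page_limit=50):
    wanted = set(query.lower().split())
    visited = {start}
    frontier = deque([start])
    results = []
    while frontier and len(visited) <= page_limit:
        page = frontier.popleft()
        if wanted & set(page_texts.get(page, "").lower().split()):
            results.append(page)  # relevant: keep for the user's report
        for neighbour in link_graph.get(page, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(neighbour)
    return results  # the organized survey handed back to the user

# Example with a tiny hand-made link graph:
texts = {"A": "global brain survey", "B": "cooking recipes", "C": "brain networks"}
links = {"A": ["B", "C"], "B": [], "C": []}
print(knowbot_search("A", "global brain", links, texts))  # ['A', 'C']
```

Note how the agent's efficiency depends entirely on the quality of the link graph built by the earlier mechanisms: with good associative links, a short crawl reaches most relevant pages; without them, it degenerates into blind search.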

All of these information-gathering mechanisms can be viewed as ways of improving the efficiency of interaction between people who cooperate to perform various actions. The knowledge provided by a person with a certain specialty becomes more readily available to others who might need it. By dramatically increasing the level of cooperation between individuals, the Internet could thus lead to the consolidation of the global brain as a coherent entity.

References

1. Basic References on the Global Brain / Superorganism

http://www.aoe.vt.edu/~cwenger/cascade.html

2. “Vernadsky, Vladimir Ivanovich” Britannica Online.

http://www.eb.com:180/cgi-bin/g?DocF=micro/621/13.html

3. Russell, Peter. The Global Brain. J.P. Tarcher, Inc. Los Angeles, 1983.

4. Heylighen, Francis and Johan Bollen. The World-Wide Web as a Super-Brain: from metaphor to model. 1996.

http://pespmc1.vub.ac.be/papers/WWWSuperBRAIN.html

5. IntelliGenesis Corp. Home page

http://www.intelligenesis.net/

———————————————————–

References on the Global Brain / Superorganism

a collection of basic references, grouped by author, that explore the idea of the emerging planetary organism and its global brain, in the chronological order of first publication


This reference material serves as the basis for the "Global Brain" study group, which has been set up by some of the authors listed below. Suggestions for further references are very much appreciated.

Herbert Spencer
The Principles of Sociology (1876-96); (see intro and excerpt, including “Society is an organism”)

It is remarkable how many recently fashionable ideas about superorganisms and evolutionary integration were already proposed by this evolutionary thinker over a century ago. Spencer coined the phrase "survival of the fittest", which Darwin later adopted.

Herbert G. Wells
World Brain (1938); (see also: H.G. Wells's Idea of a World Brain: A Critical Re-Assessment, by W. Boyd Rayward; Constructing the world mind; and Towards the World Brain, by Eugene Garfield)

a science fiction writer's prophetic vision of a world encyclopedia of knowledge that would provide a kind of global consciousness, like the world-wide web now does

Pierre Teilhard de Chardin:
“Le Phénomène Humain” (Seuil, Paris, 1955); translated as “The Phenomenon of Man” (Harper & Row, New York, 1959).

the mystical and poetic vision of future evolutionary integration by this paleontologist and Jesuit priest anticipates many recent developments. Teilhard popularized Vernadsky's term "noosphere" (mind sphere), denoting the network of thoughts, information and communication that envelops the planet.
See also: A Globe, Clothing Itself with a Brain.

Joël de Rosnay:
several books in French, including “L’Homme Symbiotique. Regards sur le troisième millénaire” (Seuil, Paris, 1996), translated as “The Symbiotic Man: A New Understanding of the Organization of Life and a Vision of the Future”; “Le Cerveau Planétaire” (Olivier Orban, Paris, 1986); and “Le Macroscope” (Seuil, Paris, 1972), translated as “The Macroscope” (Harper & Row, New York, 1975).

emergence of the "cybion" (cybernetic superorganism), analysed by means of concepts from systems theory and the theories of chaos, self-organization and evolution, with applications to the new network and multimedia technologies and to questions of policy.

Valentin Turchin:
The Phenomenon of Science. A cybernetic approach to human evolution, (Columbia University Press, New York, 1977).

cybernetic theory of universal evolution, from unicellular organisms to culture and society, culminating in the emerging "super-being", based on the concept of the Metasystem Transition.

Peter Russell:
“The Global Brain Awakens: Our Next Evolutionary Leap” (Global Brain, 1996; originally published in 1983 as “The Global Brain”). For an excerpt, see Towards a Global Brain.

development of the superorganism theme in a "New Age" vision, with more emphasis on consciousness-raising techniques like meditation, and less on evolutionary mechanisms and technology.

Gottfried Mayer-Kress:
several papers, including:
Gottfried Mayer-Kress & Cathleen Barczys (1995): “The Global Brain as an Emergent Structure from the Worldwide Computing Network”, The Information Society 11 (1).

explores the analogies between global networks and complex adaptive systems, and the applications of the network to modelling complex problem domains; for a summary, see The Global Brain Concept.

Gregory Stock:
“Metaman: The Merging of Humans and Machines into a Global Superorganism” (Simon & Schuster, New York, 1993).

an optimistic picture of the evolution of society, with many statistics on economic, social and technological progress, in which humans and machines unite, and where the individual is increasingly tied to others through technology

Brian R. Gaines:
The Collective Stance in Modeling Expertise in Individuals and Organizations, International Journal of Expert Systems 7(1), 1994, pp. 22-51. (See also The Emergence of Knowledge through Modeling and Management Processes in Societies of Adaptive Agents, Proceedings of the 10th Knowledge Acquisition Workshop, Banff, Alberta, pp. 24-1:24-13, and other Gaines articles.)

an in-depth review of the literature on sociology, cognitive science and systems theory about the social function of knowledge; its "collective stance" views humanity as an organism partitioned into sub-organisms, such as organizations and individuals; proposes a positive feedback mechanism for the development of expertise, and therefore division-of-labor.

Francis Heylighen & Donald T. Campbell:
Selection of Organization at the Social Level: obstacles and facilitators of Metasystem Transitions, “World Futures: the journal of general evolution”, Vol. 45:1-4 (1995), p. 181.

a critical examination of the evolutionary mechanisms underlying the emergence of social "superorganisms", like multicellular organisms, ant nests, or human organizations; concludes that humanity at present cannot yet be seen as a superorganism, and that there are serious obstacles on the road to further integration

Francis Heylighen & Johan Bollen:
“The World-Wide Web as a Super-Brain: from metaphor to model”, in: R. Trappl (ed.), Cybernetics and Systems ’96 (Austrian Society for Cybernetic Studies, 1996), p. 917.

discusses the precise mechanisms (learning, thinking, spreading activation, ...) through which a brain-like global network might be implemented, using the framework of the theory of metasystem transitions;
see also: learning, brain-like webs, and F. Heylighen, "The Global Superorganism: an evolutionary-cybernetic model of the emerging network society", an extensive, in-depth review of the superorganism/global brain vision and its implications for the future of society.

Ben Goertzel:
World Wide Brain: The Emergence of Global Web Intelligence and How it Will Transform the Human Race

the concept of the WorldWideBrain as a massively parallel intelligence, consisting of structures and dynamics emergent from a community of intelligent WWW agents, distributed worldwide, with a discussion of social and philosophical implications, including a review of the discussions in the Global Brain Group;
See also: an older version of the previous paper, with some additional material; the webMind software developed by Intelligenesis, a software company co-founded by Goertzel; and "Wild Computing: Steps Toward a Philosophy of Internet Intelligence", an electronic book expanding on Goertzel's ideas.

David Williams
The Human Macro-organism as Fungus, Wired 4.04 (1996).

an intelligent parody of the superorganism view of society: "Pull Bill Gates out of his office and put him in the veldt – in four days he's a bloated corpse in the sun." ;-)

Lee Li-Jen Chen and Brian R. Gaines:
A CyberOrganism Model for Awareness in Collaborative Communities on the Internet, International Journal of Intelligent Systems (IJIS), Vol. 12, No. 1. pp. 31-56. (1997)

an awareness-oriented framework for the web, based on Miller's "Living Systems" theory and a collective intelligence model, to conceptualize the Internet as an organism, with particular emphasis on tools for time awareness;
see also: Modeling the Internet as a Cyberorganism: a Living Systems Framework and Investigative Methodologies for Virtual Cooperative Interaction, Chen's 1997 PhD thesis.
 

Howard Bloom:
The Lucifer Principle: A Scientific Expedition Into the Forces of History, and Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century

two popular books, which describe animal and human social groups as superorganisms, whose members merge their minds into a single mass-learning machine; explores the biological, evolutionary and historical origins of collective minds, before modern information technology; see excerpt “Superorganism” and a series of papers on the history of the global brain; see also “Beyond the Supercomputer: Social Groups as Self-invention Machines“.

The Symbiotic Intelligence Project:
a group at Los Alamos National Laboratory which studies self-organizing knowledge on distributed networks driven by human interaction. It has produced a few papers, such as Symbiotic Intelligence and the Internet.

Parker Rossman
Research On Global Crises, Still Primitive?, an online "book in process" about how humanity's knowledge can be organized through collective intelligence in the form of a "world brain" system, to tackle various global problems, such as education, health care, peace, the environment, poverty and ethics.

John E. Stewart
“Evolution’s Arrow: The Direction of Evolution and the Future of Humanity” (Chapman Press, Australia, 2000): argues that evolution progresses in the direction of cooperative organisations of greater scale and evolvability, up to global society.

Robert Wright
“Non-Zero: The Logic of Human Destiny” (Pantheon Books, 2000): a very well-written book that develops an argument similar to Stewart's, of evolutionary progress towards greater complexity, intelligence and, eventually, global integration of humankind, on the basis of a retelling of human history.

Michael Brooks
“Global Brain”, New Scientist, 24 June 2000, p. 22: a rather sensationalist feature article about the work of global brain researchers, based on interviews with Heylighen, Bollen, Joslyn, Johnson and Goertzel. It emphasizes the scary, "Big Brother"-like possibilities while minimizing the built-in protections against such abuse. For a somewhat more balanced view, read the accompanying editorial.

NSF/DOC
Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science (2002): a 400-page report by a large group of specialists about how these technologies, working together, can change our lives during the next twenty years and turn society into an "interconnected brain".

Global Brain Group
Papers from the 1st Global Brain Workshop: a collection of abstracts, PowerPoint presentations and papers from the talks held at this workshop in July 2001, using a variety of perspectives to look at the Global Brain idea.

Copyright © 2003 Principia Cybernetica. Author: F. Heylighen. Created 4 Oct 1996; last modified 17 Apr 2003.

2 Responses to “Greedy Google blocks the global internet superorganism mind: many references”

  1. Recai, August 28, 2011 at 8:08 pm

    Greedy Google is monetizing the internet by keeping it stupid. Twitter and Facebook are also competing to monetize the same stupid internet. A smart internet will free our world from the slavery of money and stupidity. They are stealing our common heritage, the internet, by keeping it stupid.

  2. Recai, August 28, 2011 at 7:52 pm

    Greedy Google is killing innovation…
    The internet is growing so fast; it is like a giant without an intelligent head. Greedy Google is sitting in the head and blocking any innovation that would turn it into an intelligent head. You know the painful difference between a smart giant and a brain-damaged giant. Our world would be a heaven with a megasuper global mind. Greedy Google is monetizing our global common heritage and ruining our global future.
