IBM wastes tax dollars 50 years after Rosenblatt’s Perceptrons

2 Sep

I want to build a Watson internet to transform our world, free it from slavery, corruption and all the terrible evils in our lives, and lift mankind into a next-generation civilization. Our world will be free of wars, diseases, hunger, poverty, crises, dictators, corruption and crimes, and of the dangers of cosmic collisions and radiation too. That is why I am seeking partners to make a Watson search engine.

However, greedy IBM is wasting tax dollars from the Department of Defense on its so-called first cognitive chip. IBM is lying when it says this is the beginning of a new era in computing. If you look at the pieces below, you will see that 50 years ago Frank Rosenblatt built the Perceptron on an IBM machine, and it was a real learning machine, like a brain. But Prof. Minsky, the stupid so-called god of computing, killed it. IBM did not care about it then, and it shows no respect now for the history of cognitive computing or for the Perceptrons of Rosenblatt, who died in a boating accident in 1971. Do not believe that this IBM cognitive chip is really doing what they say it is doing until you see it.

So, I am seeking partners to put IBM Watson on the internet and let all internet users get answers to any question. Partners want to know my personal history, so I want to make a documentary by compiling the television and press pieces already out there in Turkey. They show my struggle against human rights abuses and persecution before I came to this beautiful country in 2000. Since a Tropicana Casino agent robbed me of my livelihood with bad checks, I am looking for funding to make this documentary and show it at film festivals. I want to prove that I am serious about this struggle to transform our world with this Watson internet search engine.

Visit watson4president.wordpress.com for more fresh, updated info.

================================================

International Business Machines Corp. (IBM) has developed a computer chip inspired by the human brain that may predict tsunamis and highlight risks in financial markets.

[Photo: Johannes Eisele/AFP/Getty Images]

The technology, called cognitive computing, is programmed to recognize patterns, make predictions and learn from mistakes, human-like capabilities not possible using today’s best computers. It’s a sharp departure from traditional chip design concepts, IBM said in a statement today.

Systems built with the new chip can synthesize events as they occur and make decisions in real time, the Armonk, New York-based company said. The project is sponsored by the U.S. Department of Defense, and IBM has received $21 million more in funding after today’s developments to bring the technology to applications. The advancement marks the end of a conceptual phase and the first step in working out how to bring the chips to scale for production, said Kelly Sims, an IBM spokeswoman.

“We’re inventing a new system, changing the game,” Dharmendra Modha, the project’s leader, said in an interview. “It’s a new generation of computers, bypassing the hurdles faced by today’s computers.”

While current computers handle commands individually on a linear if/then basis, Modha said machines equipped with the new chips will “rewire themselves on the fly.” Without any set programming, the devices reach decisions through integrated memory, computation and communication cores that resemble synapses, neurons and axons, respectively, in the brain’s nervous system. The sensor network is three times larger than any ever created, he said.
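
IBM has not published the chip’s neuron model in this piece, but the flavor of “neurons, synapses and axons” computing can be sketched with a standard leaky integrate-and-fire neuron, a common abstraction in neuromorphic hardware. The Python below is a minimal illustrative sketch under that assumption, not IBM’s actual circuit; all names and constants are made up for illustration.

# Minimal leaky integrate-and-fire neuron sketch (illustrative only;
# NOT IBM's design). Synapses deliver weighted inputs, the neuron
# integrates them with leak, and a spike travels down the "axon"
# when the membrane potential crosses a threshold.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential (the "memory" state)
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-step decay factor

    def step(self, weighted_inputs):
        # Integrate incoming weighted spikes, then fire-and-reset if needed.
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # spike out
        return 0

# Toy usage: two synapses driving one neuron over three time steps.
neuron = LIFNeuron()
for inputs in [(0.4, 0.3), (0.5, 0.0), (0.6, 0.6)]:
    print(neuron.step(inputs))      # prints 0, 1, 1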

Reacting to Surroundings

Cognitive computers may react to taste, touch, smells and sound, Modha said, while consuming less power and volume than today’s technology.

The core of the device has 3.8 million transistors in a 4.2-millimeter square of silicon, a tiny scale for what the chip accomplishes, said Richard Doherty, research director at Seaford, New York-based Envisioneering Group, a technology assessment firm.

“Research like this has been going on for decades, but this marks the beginning of commerce, industry and profits for cognitive computing,” Doherty said in an interview. “They’re going to be more like humans and primates than ever before.”

Current chips using 2 billion transistors couldn’t perform similar tasks, he said, and they’d have to be 10 times as large.

“We’re giving birth to a new type of architecture, enabling applications to ask computers to help us with today,” Modha said.

Potential Uses

Some of that help may come, for example, in the form of predicting tsunamis or creating financial opportunities, he said. By constantly recording and reporting temperature, pressure, wave height and ocean tides, such a system could issue a warning for giant waves based on its decision making.

“In a financial sense, it would be looking not for micro opportunities, but a macro pattern that spots anomalies, risks, and creates new possibilities,” he said.

The technology also might be used in a doctor’s glove, providing real-time patient data during surgery based on the body’s texture, smell and temperature.

“The impact of this is inevitable, but the timing is unpredictable,” Modha said. “As a civilization we’re computing with half a brain, and now we’re bringing forth that other half to complement it.”

Marvin Minsky

From Wikipedia, the free encyclopedia


Marvin Lee Minsky (born August 9, 1927) is an American cognitive scientist in the field of artificial intelligence (AI), co-founder of Massachusetts Institute of Technology’s AI laboratory, and author of several texts on AI and philosophy.

Biography

Marvin Lee Minsky was born in New York City to a Jewish family,[1] where he attended The Fieldston School and the Bronx High School of Science. He later attended Phillips Academy in Andover, Massachusetts. He served in the US Navy from 1944 to 1945. He holds a BA in Mathematics from Harvard (1950) and a PhD in the same field from Princeton (1954).[2] He has been on the MIT faculty since 1958. In 1959[3] he and John McCarthy founded what is now known as the MIT Computer Science and Artificial Intelligence Laboratory. He is currently the Toshiba Professor of Media Arts and Sciences, and Professor of electrical engineering and computer science.

Minsky won the Turing Award in 1969, the Japan Prize in 1990, the IJCAI Award for Research Excellence in 1991, and the Benjamin Franklin Medal from the Franklin Institute in 2001.[4]

Isaac Asimov described Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan.[5] Patrick Winston has also described Minsky as the smartest person he has ever met. Ray Kurzweil has referred to Minsky as his mentor.

[Image: 3D profile of a coin (partial), measured with a modern confocal white light microscope.]

Minsky’s inventions include the first head-mounted graphical display (1963) and the confocal microscope[6] (1957, a predecessor to today’s widely used confocal laser scanning microscope). He developed, with Seymour Papert, the first Logo “turtle”. Minsky also built, in 1951, the first randomly wired neural network learning machine, SNARC.

Minsky wrote the book Perceptrons (with Seymour Papert), which became the foundational work in the analysis of artificial neural networks. The book is the center of a controversy in the history of AI: some claim it had great importance in driving research away from neural networks in the 1970s and contributing to the so-called AI winter. That said, few of the mathematical proofs in the book, which are still important and interesting to the study of perceptron networks, were ever countered. Minsky also created several other famous AI models. His paper “A Framework for Representing Knowledge” created a new paradigm in programming, and while Perceptrons is now more a historical than a practical book, the theory of frames is in wide use. Minsky was an adviser[7] on the movie 2001: A Space Odyssey and is referred to in the movie and the book.

Frank Rosenblatt

From Wikipedia, the free encyclopedia


[Photo: The gravestone of Frank Rosenblatt, Brooktondale, NY.]

Frank Rosenblatt (11 July 1928 – 11 July 1971) was a New York City-born computer scientist who completed the Perceptron, or Mark I, computer at Cornell University in 1960. This was the first computer that could learn new skills by trial and error, using a type of neural network that simulates human thought processes.

Rosenblatt’s perceptrons were initially simulated on an IBM 704 computer at Cornell Aeronautical Laboratory in 1957. By the study of neural networks such as the Perceptron, Rosenblatt hoped that “the fundamental laws of organization which are common to all information handling systems, machines and men included, may eventually be understood.”
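
Rosenblatt’s trial-and-error learning procedure is simple enough to state in a few lines. Here is a minimal sketch of the classic perceptron update rule in Python; the learning rate, the toy AND task and all names are illustrative, not a reconstruction of the Mark I hardware.

# Minimal sketch of the perceptron learning rule.
# On each misclassified example the weights are nudged toward the
# correct answer; for linearly separable data this converges.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    w = [0.0] * len(samples[0])  # weights
    b = 0.0                      # bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation > 0 else -1
            if predicted != y:             # learn only from mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy usage: learn logical AND, which is linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print(w, b)  # [2.0, 1.0] and -2.0: a line separating AND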

A 1946 graduate of the Bronx High School of Science, Rosenblatt was a colorful character at Cornell in the early 1960s. A handsome bachelor, he drove a classic MGA sports car and was often seen with his cat named Tobermory. He enjoyed mixing with undergraduates, and for several years taught an interdisciplinary undergraduate honors course entitled “Theory of Brain Mechanisms” that drew students equally from Cornell’s Engineering and Liberal Arts colleges.

This course was a melange of ideas drawn from a huge variety of sources: results from experimental brain surgery on epileptic patients while conscious, experiments on measuring the activity of individual neurons in the visual cortex of cats, studies of loss of particular kinds of mental function as a result of trauma to specific areas of the brain, and various analog and digital electronic circuits that modeled various details of neuronal behavior (i.e. the perceptron itself, as a machine).

There were also some breathtaking speculations, based on what was known about brain behavior at this time (well before the CAT or PET scan was available), including one calculation that, based on the number of neuronal connections in a human brain, the human cortex had enough storage space to hold a complete “photographic” record of its perceptual inputs, stored at the 16 frames-per-second rate of flicker fusion, for about two hundred years.

In 1962 Rosenblatt published much of the content of this honors course in the book “Principles of neurodynamics: Perceptrons and the theory of brain mechanisms” (Spartan Books, 1962) which he used thereafter as a textbook for the course.

Research on similar devices was also being done in other places, such as SRI, and many researchers had high expectations for what they could do. The initial excitement diminished, though, when in 1969 Marvin Minsky and Seymour Papert published the book Perceptrons, with mathematical proofs that elucidated some of the characteristics of three-layer feed-forward perceptrons. On the one hand, they demonstrated some of the advantages of using them in certain cases. But they also presented some limitations. The most important was the impossibility of implementing general functions using only “local” neurons, which don’t have all inputs available; this was taken by many people as one of the most important characteristics of perceptrons.

Rosenblatt died in a boating accident in 1971. He is buried at Quick Cemetery in Brooktondale, New York. After research on neural networks returned to the mainstream in the 1980s, new researchers began to study his work again. Some researchers interpret this new wave of neural network research as contradicting hypotheses presented in the book Perceptrons and as confirming Rosenblatt’s expectations, but the extent of this is questioned by some.[1]

In 2004 the IEEE established the Frank Rosenblatt Award, for “outstanding contributions to the advancement of the design, practice, techniques or theory in biologically and linguistically motivated computational paradigms including but not limited to neural networks, connectionist systems, evolutionary computation, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.”[2]

============================================================


Although the perceptron initially seemed promising, it was eventually proved that single-layer perceptrons could not be trained to recognise many classes of patterns. This led the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with three or more layers (also called a multilayer perceptron) had far greater processing power than a perceptron with one layer (a single-layer perceptron) or two. A feedforward neural network is an artificial neural network in which the connections between units do not form a directed cycle. Single-layer perceptrons are only capable of learning linearly separable patterns (in geometry, two sets of points in the plane are linearly separable when they can be completely separated by a single line); in 1969 the famous book Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for this class of network to learn an XOR function. They conjectured (incorrectly) that a similar result would hold for a perceptron with three or more layers. Three years later, Stephen Grossberg published a series of papers introducing networks capable of modelling differential, contrast-enhancing and XOR functions (the papers appeared in 1972 and 1973; see e.g. Grossberg, “Contour enhancement, short-term memory, and constancies in reverberating neural networks”, Studies in Applied Mathematics, 52 (1973), 213-257). Nevertheless, the often-cited Minsky/Papert text caused a significant decline in interest in, and funding of, neural network research, and it took about ten more years until neural network research experienced a resurgence in the 1980s. The text was reprinted in 1987 as “Perceptrons – Expanded Edition”, in which some errors in the original text are shown and corrected.
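
The XOR result is easy to check by hand. A single-layer perceptron outputs 1 exactly when w1*x1 + w2*x2 + b > 0, so learning XOR would require all four of:

    b <= 0                (XOR(0,0) = 0)
    w1 + b > 0            (XOR(1,0) = 1)
    w2 + b > 0            (XOR(0,1) = 1)
    w1 + w2 + b <= 0      (XOR(1,1) = 0)

Adding the two middle inequalities gives w1 + w2 + 2b > 0, and since b <= 0 this forces w1 + w2 + b > 0, contradicting the last line. No weights and bias satisfy all four conditions, which is exactly the sense in which XOR is not linearly separable.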

More recently, interest in the perceptron learning algorithm increased again after Freund and Schapire (1998) presented a voted formulation of the original algorithm (attaining a large margin) and suggested that the kernel trick could be applied to it. (In machine learning, the kernel trick is a method for using a linear classifier algorithm to solve a non-linear problem by mapping the original non-linear observations into a higher-dimensional space.) The kernel perceptron can not only handle nonlinearly separable data but can also go beyond vectors and classify instances with a relational representation (e.g. trees, graphs or sequences).
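
As a rough illustration of that idea, here is a minimal kernel perceptron sketch in Python. It is the plain (unvoted) variant rather than Freund and Schapire’s voted formulation, and the polynomial kernel and toy data are illustrative assumptions.

# Minimal kernel perceptron sketch (plain, unvoted variant).
# Instead of a weight vector it keeps a mistake count (alpha) per
# training example and classifies via kernel evaluations.

def poly_kernel(a, b, degree=2):
    return (1 + sum(x * y for x, y in zip(a, b))) ** degree

def decision(X, y, alpha, x, kernel):
    return sum(alpha[j] * y[j] * kernel(X[j], x) for j in range(len(X)))

def train_kernel_perceptron(X, y, kernel=poly_kernel, epochs=20):
    alpha = [0] * len(X)                 # mistake counts
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * decision(X, y, alpha, X[i], kernel) <= 0:
                alpha[i] += 1            # remember this mistake
    return alpha

# Toy usage: XOR, unlearnable by a single-layer perceptron, becomes
# separable under a degree-2 polynomial kernel.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]
alpha = train_kernel_perceptron(X, y)
print([1 if decision(X, y, alpha, x, poly_kernel) > 0 else -1 for x in X])
# prints [-1, 1, 1, -1]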
